00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1035 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3702 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.155 Using shallow fetch with depth 1 00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.155 > git --version # timeout=10 00:00:00.178 > git --version # 'git version 2.39.2' 00:00:00.178 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.488 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.500 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.511 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.511 > git config core.sparsecheckout # timeout=10 00:00:06.522 > git read-tree -mu HEAD # timeout=10 00:00:06.537 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.582 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.582 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.706 [Pipeline] Start of Pipeline 00:00:06.717 [Pipeline] library 00:00:06.718 Loading library shm_lib@master 00:00:06.719 Library shm_lib@master is cached. Copying from home. 00:00:06.730 [Pipeline] node 00:00:06.750 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.751 [Pipeline] { 00:00:06.759 [Pipeline] catchError 00:00:06.761 [Pipeline] { 00:00:06.776 [Pipeline] wrap 00:00:06.786 [Pipeline] { 00:00:06.792 [Pipeline] stage 00:00:06.793 [Pipeline] { (Prologue) 00:00:06.810 [Pipeline] echo 00:00:06.811 Node: VM-host-SM9 00:00:06.816 [Pipeline] cleanWs 00:00:06.825 [WS-CLEANUP] Deleting project workspace... 00:00:06.825 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.831 [WS-CLEANUP] done 00:00:07.030 [Pipeline] setCustomBuildProperty 00:00:07.117 [Pipeline] httpRequest 00:00:07.570 [Pipeline] echo 00:00:07.572 Sorcerer 10.211.164.101 is alive 00:00:07.581 [Pipeline] retry 00:00:07.583 [Pipeline] { 00:00:07.595 [Pipeline] httpRequest 00:00:07.599 HttpMethod: GET 00:00:07.600 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.600 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.604 Response Code: HTTP/1.1 200 OK 00:00:07.605 Success: Status code 200 is in the accepted range: 200,404 00:00:07.605 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.494 [Pipeline] } 00:00:08.511 [Pipeline] // retry 00:00:08.518 [Pipeline] sh 00:00:08.853 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.867 [Pipeline] httpRequest 00:00:09.667 [Pipeline] echo 00:00:09.672 Sorcerer 10.211.164.101 is alive 00:00:09.728 [Pipeline] retry 00:00:09.733 [Pipeline] { 00:00:09.755 [Pipeline] httpRequest 00:00:09.762 HttpMethod: GET 00:00:09.763 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.763 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.786 Response Code: HTTP/1.1 200 OK 00:00:09.786 Success: Status code 200 is in the accepted range: 200,404 00:00:09.787 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:04.693 [Pipeline] } 00:01:04.713 [Pipeline] // retry 00:01:04.722 [Pipeline] sh 00:01:05.040 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:07.590 [Pipeline] sh 00:01:07.870 + git -C spdk log --oneline -n5 00:01:07.871 c13c99a5e test: Various fixes for Fedora40 00:01:07.871 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:07.871 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:07.871 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:07.871 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:07.889 [Pipeline] withCredentials 00:01:07.900 > git --version # timeout=10 00:01:07.913 > git --version # 'git version 2.39.2' 00:01:07.929 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.931 [Pipeline] { 00:01:07.941 [Pipeline] retry 00:01:07.943 [Pipeline] { 00:01:07.959 [Pipeline] sh 00:01:08.238 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:08.250 [Pipeline] } 00:01:08.273 [Pipeline] // retry 00:01:08.279 [Pipeline] } 00:01:08.297 [Pipeline] // withCredentials 00:01:08.309 [Pipeline] httpRequest 00:01:08.707 [Pipeline] echo 00:01:08.709 Sorcerer 10.211.164.101 is alive 00:01:08.719 [Pipeline] retry 00:01:08.721 [Pipeline] { 00:01:08.736 [Pipeline] httpRequest 00:01:08.740 HttpMethod: GET 00:01:08.741 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:08.741 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:08.743 Response Code: HTTP/1.1 200 OK 00:01:08.743 Success: Status code 200 is in the accepted range: 200,404 00:01:08.744 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.220 [Pipeline] } 
00:01:12.238 [Pipeline] // retry 00:01:12.246 [Pipeline] sh 00:01:12.525 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:13.916 [Pipeline] sh 00:01:14.194 + git -C dpdk log --oneline -n5 00:01:14.194 eeb0605f11 version: 23.11.0 00:01:14.194 238778122a doc: update release notes for 23.11 00:01:14.194 46aa6b3cfc doc: fix description of RSS features 00:01:14.194 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:14.194 7e421ae345 devtools: support skipping forbid rule check 00:01:14.209 [Pipeline] writeFile 00:01:14.223 [Pipeline] sh 00:01:14.501 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.512 [Pipeline] sh 00:01:14.788 + cat autorun-spdk.conf 00:01:14.788 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.788 SPDK_TEST_NVMF=1 00:01:14.788 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.788 SPDK_TEST_URING=1 00:01:14.788 SPDK_TEST_USDT=1 00:01:14.788 SPDK_RUN_UBSAN=1 00:01:14.788 NET_TYPE=virt 00:01:14.788 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:14.788 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:14.788 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.793 RUN_NIGHTLY=1 00:01:14.795 [Pipeline] } 00:01:14.807 [Pipeline] // stage 00:01:14.820 [Pipeline] stage 00:01:14.822 [Pipeline] { (Run VM) 00:01:14.833 [Pipeline] sh 00:01:15.109 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:15.109 + echo 'Start stage prepare_nvme.sh' 00:01:15.109 Start stage prepare_nvme.sh 00:01:15.109 + [[ -n 1 ]] 00:01:15.109 + disk_prefix=ex1 00:01:15.109 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:15.109 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:15.109 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:15.109 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.109 ++ SPDK_TEST_NVMF=1 00:01:15.109 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.109 ++ SPDK_TEST_URING=1 00:01:15.109 ++ SPDK_TEST_USDT=1 00:01:15.109 ++ SPDK_RUN_UBSAN=1 00:01:15.109 ++ NET_TYPE=virt 00:01:15.109 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:15.109 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:15.109 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.109 ++ RUN_NIGHTLY=1 00:01:15.109 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:15.109 + nvme_files=() 00:01:15.109 + declare -A nvme_files 00:01:15.109 + backend_dir=/var/lib/libvirt/images/backends 00:01:15.109 + nvme_files['nvme.img']=5G 00:01:15.109 + nvme_files['nvme-cmb.img']=5G 00:01:15.109 + nvme_files['nvme-multi0.img']=4G 00:01:15.109 + nvme_files['nvme-multi1.img']=4G 00:01:15.109 + nvme_files['nvme-multi2.img']=4G 00:01:15.109 + nvme_files['nvme-openstack.img']=8G 00:01:15.109 + nvme_files['nvme-zns.img']=5G 00:01:15.109 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:15.109 + (( SPDK_TEST_FTL == 1 )) 00:01:15.109 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:15.109 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:15.109 + for nvme in "${!nvme_files[@]}" 00:01:15.109 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:15.110 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.110 + for nvme in "${!nvme_files[@]}" 00:01:15.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:15.110 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.110 + for nvme in "${!nvme_files[@]}" 00:01:15.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:15.367 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:15.367 + for nvme in "${!nvme_files[@]}" 00:01:15.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:15.367 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.367 + for nvme in "${!nvme_files[@]}" 00:01:15.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:15.626 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.626 + for nvme in "${!nvme_files[@]}" 00:01:15.626 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:15.885 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.885 + for nvme in "${!nvme_files[@]}" 00:01:15.885 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:15.885 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.885 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:15.885 + echo 'End stage prepare_nvme.sh' 00:01:15.885 End stage prepare_nvme.sh 00:01:15.896 [Pipeline] sh 00:01:16.175 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.176 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:16.176 00:01:16.176 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:16.176 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:16.176 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.176 HELP=0 00:01:16.176 DRY_RUN=0 00:01:16.176 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:16.176 NVME_DISKS_TYPE=nvme,nvme, 00:01:16.176 NVME_AUTO_CREATE=0 00:01:16.176 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:16.176 NVME_CMB=,, 00:01:16.176 NVME_PMR=,, 00:01:16.176 NVME_ZNS=,, 00:01:16.176 NVME_MS=,, 00:01:16.176 NVME_FDP=,, 
00:01:16.176 SPDK_VAGRANT_DISTRO=fedora39 00:01:16.176 SPDK_VAGRANT_VMCPU=10 00:01:16.176 SPDK_VAGRANT_VMRAM=12288 00:01:16.176 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.176 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.176 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.176 SPDK_OPENSTACK_NETWORK=0 00:01:16.176 VAGRANT_PACKAGE_BOX=0 00:01:16.176 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:16.176 FORCE_DISTRO=true 00:01:16.176 VAGRANT_BOX_VERSION= 00:01:16.176 EXTRA_VAGRANTFILES= 00:01:16.176 NIC_MODEL=e1000 00:01:16.176 00:01:16.176 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:16.176 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:19.462 Bringing machine 'default' up with 'libvirt' provider... 00:01:19.721 ==> default: Creating image (snapshot of base box volume). 00:01:19.980 ==> default: Creating domain with the following settings... 00:01:19.980 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733482650_7acc9a97934f76d8bf6a 00:01:19.980 ==> default: -- Domain type: kvm 00:01:19.980 ==> default: -- Cpus: 10 00:01:19.980 ==> default: -- Feature: acpi 00:01:19.980 ==> default: -- Feature: apic 00:01:19.980 ==> default: -- Feature: pae 00:01:19.980 ==> default: -- Memory: 12288M 00:01:19.980 ==> default: -- Memory Backing: hugepages: 00:01:19.980 ==> default: -- Management MAC: 00:01:19.980 ==> default: -- Loader: 00:01:19.980 ==> default: -- Nvram: 00:01:19.980 ==> default: -- Base box: spdk/fedora39 00:01:19.980 ==> default: -- Storage pool: default 00:01:19.980 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733482650_7acc9a97934f76d8bf6a.img (20G) 00:01:19.980 ==> default: -- Volume Cache: default 00:01:19.980 ==> default: -- Kernel: 00:01:19.980 ==> default: -- Initrd: 00:01:19.980 ==> default: -- Graphics Type: vnc 00:01:19.980 ==> default: -- Graphics Port: -1 00:01:19.980 ==> default: -- Graphics IP: 127.0.0.1 00:01:19.980 ==> default: -- Graphics Password: Not defined 00:01:19.980 ==> default: -- Video Type: cirrus 00:01:19.980 ==> default: -- Video VRAM: 9216 00:01:19.980 ==> default: -- Sound Type: 00:01:19.980 ==> default: -- Keymap: en-us 00:01:19.980 ==> default: -- TPM Path: 00:01:19.980 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:19.980 ==> default: -- Command line args: 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:19.980 ==> default: -> value=-drive, 00:01:19.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:19.980 ==> default: -> value=-drive, 00:01:19.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.980 ==> default: -> value=-drive, 00:01:19.980 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.980 ==> default: -> value=-drive, 00:01:19.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:19.980 ==> default: -> value=-device, 00:01:19.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:19.980 ==> default: Creating shared folders metadata... 00:01:19.980 ==> default: Starting domain. 00:01:21.359 ==> default: Waiting for domain to get an IP address... 00:01:39.436 ==> default: Waiting for SSH to become available... 00:01:39.436 ==> default: Configuring and enabling network interfaces... 00:01:41.965 default: SSH address: 192.168.121.58:22 00:01:41.965 default: SSH username: vagrant 00:01:41.965 default: SSH auth method: private key 00:01:44.490 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.046 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:56.310 ==> default: Mounting SSHFS shared folder... 00:01:58.210 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.210 ==> default: Checking Mount.. 00:01:59.146 ==> default: Folder Successfully Mounted! 00:01:59.146 ==> default: Running provisioner: file... 00:02:00.079 default: ~/.gitconfig => .gitconfig 00:02:00.349 00:02:00.349 SUCCESS! 00:02:00.349 00:02:00.349 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.349 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.349 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.349 00:02:00.371 [Pipeline] } 00:02:00.385 [Pipeline] // stage 00:02:00.393 [Pipeline] dir 00:02:00.394 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:00.395 [Pipeline] { 00:02:00.407 [Pipeline] catchError 00:02:00.408 [Pipeline] { 00:02:00.419 [Pipeline] sh 00:02:00.696 + vagrant+ ssh-config --host vagrant 00:02:00.696 sed -ne /^Host/,$p 00:02:00.696 + tee ssh_conf 00:02:04.880 Host vagrant 00:02:04.880 HostName 192.168.121.58 00:02:04.880 User vagrant 00:02:04.880 Port 22 00:02:04.880 UserKnownHostsFile /dev/null 00:02:04.880 StrictHostKeyChecking no 00:02:04.880 PasswordAuthentication no 00:02:04.880 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.880 IdentitiesOnly yes 00:02:04.880 LogLevel FATAL 00:02:04.880 ForwardAgent yes 00:02:04.880 ForwardX11 yes 00:02:04.880 00:02:04.892 [Pipeline] withEnv 00:02:04.894 [Pipeline] { 00:02:04.906 [Pipeline] sh 00:02:05.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:05.182 source /etc/os-release 00:02:05.182 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.182 # Minimal, systemd-like check. 
00:02:05.182 if [[ -e /.dockerenv ]]; then 00:02:05.182 # Clear garbage from the node's name: 00:02:05.182 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.182 # $HOSTNAME is the actual container id 00:02:05.182 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.182 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.182 # We can assume this is a mount from a host where container is running, 00:02:05.182 # so fetch its hostname to easily identify the target swarm worker. 00:02:05.182 container="$(< /etc/hostname) ($agent)" 00:02:05.182 else 00:02:05.182 # Fallback 00:02:05.182 container=$agent 00:02:05.182 fi 00:02:05.182 fi 00:02:05.182 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.182 00:02:05.451 [Pipeline] } 00:02:05.466 [Pipeline] // withEnv 00:02:05.473 [Pipeline] setCustomBuildProperty 00:02:05.486 [Pipeline] stage 00:02:05.488 [Pipeline] { (Tests) 00:02:05.503 [Pipeline] sh 00:02:05.781 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.055 [Pipeline] sh 00:02:06.335 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.607 [Pipeline] timeout 00:02:06.608 Timeout set to expire in 1 hr 0 min 00:02:06.610 [Pipeline] { 00:02:06.624 [Pipeline] sh 00:02:06.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:07.468 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:07.480 [Pipeline] sh 00:02:07.765 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.037 [Pipeline] sh 00:02:08.317 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.592 [Pipeline] sh 00:02:08.926 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:08.926 ++ readlink -f spdk_repo 00:02:08.926 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.926 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.926 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.926 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.926 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.926 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.926 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.926 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:08.926 + cd /home/vagrant/spdk_repo 00:02:08.926 + source /etc/os-release 00:02:08.926 ++ NAME='Fedora Linux' 00:02:08.926 ++ VERSION='39 (Cloud Edition)' 00:02:08.926 ++ ID=fedora 00:02:08.926 ++ VERSION_ID=39 00:02:08.926 ++ VERSION_CODENAME= 00:02:08.926 ++ PLATFORM_ID=platform:f39 00:02:08.926 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.926 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.926 ++ LOGO=fedora-logo-icon 00:02:08.926 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.926 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.926 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.926 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.926 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.926 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.926 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.926 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.926 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.926 ++ SUPPORT_END=2024-11-12 00:02:08.926 ++ VARIANT='Cloud Edition' 00:02:08.926 ++ VARIANT_ID=cloud 00:02:08.926 + uname -a 00:02:08.927 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.927 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.185 Hugepages 00:02:09.185 node hugesize free / total 00:02:09.185 node0 1048576kB 0 / 0 00:02:09.185 node0 2048kB 0 / 0 00:02:09.185 00:02:09.185 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.185 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.185 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.185 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:09.185 + rm -f /tmp/spdk-ld-path 00:02:09.185 + source autorun-spdk.conf 00:02:09.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.185 ++ SPDK_TEST_NVMF=1 00:02:09.185 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.185 ++ SPDK_TEST_URING=1 00:02:09.185 ++ SPDK_TEST_USDT=1 00:02:09.185 ++ SPDK_RUN_UBSAN=1 00:02:09.185 ++ NET_TYPE=virt 00:02:09.185 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:09.185 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:09.185 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.185 ++ RUN_NIGHTLY=1 00:02:09.185 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.185 + [[ -n '' ]] 00:02:09.185 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.185 + for M in /var/spdk/build-*-manifest.txt 00:02:09.185 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.185 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.185 + for M in /var/spdk/build-*-manifest.txt 00:02:09.185 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.185 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.185 + for M in /var/spdk/build-*-manifest.txt 00:02:09.185 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.185 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.185 ++ uname 00:02:09.185 + [[ Linux == \L\i\n\u\x ]] 00:02:09.185 + sudo dmesg -T 00:02:09.444 + sudo dmesg --clear 00:02:09.445 + dmesg_pid=5970 00:02:09.445 + sudo dmesg -Tw 00:02:09.445 + [[ Fedora Linux == FreeBSD ]] 00:02:09.445 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.445 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.445 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.445 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.445 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.445 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.445 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.445 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:09.445 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.445 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.445 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.445 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.445 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.445 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.445 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.445 Test configuration: 00:02:09.445 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.445 SPDK_TEST_NVMF=1 00:02:09.445 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.445 SPDK_TEST_URING=1 00:02:09.445 SPDK_TEST_USDT=1 00:02:09.445 SPDK_RUN_UBSAN=1 00:02:09.445 NET_TYPE=virt 00:02:09.445 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:09.445 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:09.445 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.445 RUN_NIGHTLY=1 10:58:20 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:09.445 10:58:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.445 10:58:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.445 10:58:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.445 10:58:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.445 10:58:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.445 10:58:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.445 10:58:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.445 10:58:20 -- paths/export.sh@5 -- $ export PATH 00:02:09.445 10:58:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.445 10:58:20 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.445 10:58:20 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:09.445 10:58:20 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733482700.XXXXXX 00:02:09.445 10:58:20 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733482700.uU42m9 00:02:09.445 10:58:20 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:09.445 10:58:20 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:09.445 10:58:20 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:09.445 10:58:20 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:09.445 10:58:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.445 10:58:20 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.445 10:58:20 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:09.445 10:58:20 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:09.445 10:58:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.445 10:58:20 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:09.445 10:58:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.445 10:58:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.445 10:58:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.445 10:58:20 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.445 Fri Dec 6 10:58:20 AM UTC 2024 00:02:09.445 10:58:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.445 LTS-67-gc13c99a5e 00:02:09.445 10:58:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:09.445 10:58:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.445 10:58:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.445 10:58:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:09.445 10:58:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:09.445 10:58:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.445 ************************************ 00:02:09.445 START TEST ubsan 00:02:09.445 ************************************ 00:02:09.445 using ubsan 00:02:09.445 10:58:20 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:09.445 00:02:09.445 real 0m0.000s 00:02:09.445 user 0m0.000s 00:02:09.445 sys 0m0.000s 00:02:09.445 10:58:20 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:09.445 10:58:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.445 ************************************ 00:02:09.445 END TEST ubsan 00:02:09.445 ************************************ 00:02:09.445 10:58:20 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:09.445 10:58:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:09.445 10:58:20 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:09.445 10:58:20 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:09.445 10:58:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:09.445 10:58:20 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:09.445 ************************************ 00:02:09.445 START TEST build_native_dpdk 00:02:09.445 ************************************ 00:02:09.445 10:58:20 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:09.445 10:58:20 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:09.445 10:58:20 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:09.445 10:58:20 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:09.445 10:58:20 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:09.445 10:58:20 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:09.445 10:58:20 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:09.445 10:58:20 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:09.445 10:58:20 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:09.704 10:58:20 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:09.704 10:58:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:09.704 10:58:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:09.704 10:58:20 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:09.704 10:58:20 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:09.704 10:58:20 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:09.704 10:58:20 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:09.704 10:58:20 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:09.704 10:58:20 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:09.704 10:58:20 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:09.704 10:58:20 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:09.704 10:58:20 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:09.704 eeb0605f11 version: 23.11.0 00:02:09.704 238778122a doc: update release notes for 23.11 00:02:09.704 46aa6b3cfc doc: fix description of RSS features 00:02:09.704 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:09.704 7e421ae345 devtools: support skipping forbid rule check 00:02:09.704 10:58:20 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:09.704 10:58:20 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:09.704 10:58:20 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:09.705 10:58:20 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:09.705 10:58:20 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:09.705 10:58:20 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:09.705 10:58:20 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:09.705 10:58:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:09.705 10:58:20 -- common/autobuild_common.sh@167 -- $ cd 
/home/vagrant/spdk_repo/dpdk 00:02:09.705 10:58:20 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:09.705 10:58:20 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:09.705 10:58:20 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:09.705 10:58:20 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:09.705 10:58:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:09.705 10:58:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:09.705 10:58:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:09.705 10:58:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:09.705 10:58:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.705 10:58:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:09.705 10:58:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:09.705 10:58:20 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:09.705 10:58:20 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:09.705 10:58:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:09.705 10:58:20 -- scripts/common.sh@343 -- $ case "$op" in 00:02:09.705 10:58:20 -- scripts/common.sh@344 -- $ : 1 00:02:09.705 10:58:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:09.705 10:58:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:09.705 10:58:20 -- scripts/common.sh@364 -- $ decimal 23 00:02:09.705 10:58:20 -- scripts/common.sh@352 -- $ local d=23 00:02:09.705 10:58:20 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:09.705 10:58:20 -- scripts/common.sh@354 -- $ echo 23 00:02:09.705 10:58:20 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:09.705 10:58:20 -- scripts/common.sh@365 -- $ decimal 21 00:02:09.705 10:58:20 -- scripts/common.sh@352 -- $ local d=21 00:02:09.705 10:58:20 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:09.705 10:58:20 -- scripts/common.sh@354 -- $ echo 21 00:02:09.705 10:58:20 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:09.705 10:58:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:09.705 10:58:20 -- scripts/common.sh@366 -- $ return 1 00:02:09.705 10:58:20 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:09.705 patching file config/rte_config.h 00:02:09.705 Hunk #1 succeeded at 60 (offset 1 line). 00:02:09.705 10:58:20 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:09.705 10:58:20 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:09.705 10:58:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:09.705 10:58:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:09.705 10:58:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:09.705 10:58:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:09.705 10:58:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.705 10:58:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:09.705 10:58:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:09.705 10:58:20 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:09.705 10:58:20 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:09.705 10:58:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:09.705 10:58:20 -- scripts/common.sh@343 -- $ case "$op" in 00:02:09.705 10:58:20 -- scripts/common.sh@344 -- $ : 1 00:02:09.705 10:58:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:09.705 10:58:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.705 10:58:20 -- scripts/common.sh@364 -- $ decimal 23 00:02:09.705 10:58:20 -- scripts/common.sh@352 -- $ local d=23 00:02:09.705 10:58:20 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:09.705 10:58:20 -- scripts/common.sh@354 -- $ echo 23 00:02:09.705 10:58:20 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:09.705 10:58:20 -- scripts/common.sh@365 -- $ decimal 24 00:02:09.705 10:58:20 -- scripts/common.sh@352 -- $ local d=24 00:02:09.705 10:58:20 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.705 10:58:20 -- scripts/common.sh@354 -- $ echo 24 00:02:09.705 10:58:20 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:09.705 10:58:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:09.705 10:58:20 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:09.705 10:58:20 -- scripts/common.sh@367 -- $ return 0 00:02:09.705 10:58:20 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:09.705 patching file lib/pcapng/rte_pcapng.c 00:02:09.705 10:58:20 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:09.705 10:58:20 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:09.705 10:58:20 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:09.705 10:58:20 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:09.705 10:58:20 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.975 The Meson build system 00:02:14.975 Version: 1.5.0 00:02:14.975 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:14.975 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:14.975 Build type: native build 00:02:14.975 Program cat found: YES (/usr/bin/cat) 00:02:14.975 Project name: DPDK 00:02:14.975 Project version: 23.11.0 00:02:14.975 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.975 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:14.975 Host machine cpu family: x86_64 00:02:14.975 Host machine cpu: x86_64 00:02:14.975 Message: ## Building in Developer Mode ## 00:02:14.975 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.975 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:14.975 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.975 Program python3 found: YES (/usr/bin/python3) 00:02:14.975 Program cat found: YES (/usr/bin/cat) 00:02:14.975 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:14.975 Compiler for C supports arguments -march=native: YES 00:02:14.975 Checking for size of "void *" : 8 00:02:14.975 Checking for size of "void *" : 8 (cached) 00:02:14.975 Library m found: YES 00:02:14.975 Library numa found: YES 00:02:14.975 Has header "numaif.h" : YES 00:02:14.975 Library fdt found: NO 00:02:14.975 Library execinfo found: NO 00:02:14.975 Has header "execinfo.h" : YES 00:02:14.975 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.975 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.975 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.975 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.975 Run-time dependency openssl found: YES 3.1.1 00:02:14.975 Run-time dependency libpcap found: YES 1.10.4 00:02:14.975 Has header "pcap.h" with dependency libpcap: YES 00:02:14.975 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.975 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.975 Compiler for C supports arguments -Wformat: YES 00:02:14.975 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.975 Compiler for C supports arguments -Wformat-security: NO 00:02:14.975 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.975 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.975 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.975 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.975 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.975 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.975 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.975 Compiler for C supports arguments -Wundef: YES 00:02:14.975 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.975 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.975 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.975 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.975 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.975 Program objdump found: YES (/usr/bin/objdump) 00:02:14.975 Compiler for C supports arguments -mavx512f: YES 00:02:14.975 Checking if "AVX512 checking" compiles: YES 00:02:14.975 Fetching value of define "__SSE4_2__" : 1 00:02:14.975 Fetching value of define "__AES__" : 1 00:02:14.975 Fetching value of define "__AVX__" : 1 00:02:14.975 Fetching value of define "__AVX2__" : 1 00:02:14.975 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.975 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.975 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.975 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.975 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.975 Fetching value of define "__PCLMUL__" : 1 00:02:14.975 Fetching value of define "__RDRND__" : 1 00:02:14.975 Fetching value of define "__RDSEED__" : 1 00:02:14.975 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.975 Fetching value of define "__znver1__" : (undefined) 00:02:14.975 Fetching value of define "__znver2__" : (undefined) 00:02:14.975 Fetching value of define "__znver3__" : (undefined) 00:02:14.975 Fetching value of define "__znver4__" : (undefined) 00:02:14.975 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.975 Message: lib/log: Defining dependency "log" 00:02:14.975 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.975 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.975 Checking for function "getentropy" : NO 00:02:14.975 Message: lib/eal: Defining dependency "eal" 00:02:14.975 Message: lib/ring: Defining dependency "ring" 00:02:14.975 Message: lib/rcu: Defining dependency "rcu" 00:02:14.975 Message: lib/mempool: Defining dependency "mempool" 00:02:14.975 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.975 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.975 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.976 Compiler for C supports arguments -mpclmul: YES 00:02:14.976 Compiler for C supports arguments -maes: YES 00:02:14.976 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.976 Compiler for C supports arguments -mavx512bw: YES 00:02:14.976 Compiler for C supports arguments -mavx512dq: YES 00:02:14.976 Compiler for C supports arguments -mavx512vl: YES 00:02:14.976 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.976 Compiler for C supports arguments -mavx2: YES 00:02:14.976 Compiler for C supports arguments -mavx: YES 00:02:14.976 Message: lib/net: Defining dependency "net" 00:02:14.976 Message: lib/meter: Defining dependency "meter" 00:02:14.976 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.976 Message: lib/pci: Defining dependency "pci" 00:02:14.976 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.976 Message: lib/metrics: Defining dependency "metrics" 00:02:14.976 Message: lib/hash: Defining dependency "hash" 00:02:14.976 Message: lib/timer: Defining dependency "timer" 00:02:14.976 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:14.976 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:14.976 Message: lib/acl: Defining dependency "acl" 00:02:14.976 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.976 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.976 Run-time dependency libelf found: YES 0.191 00:02:14.976 Message: lib/bpf: Defining dependency "bpf" 00:02:14.976 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.976 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.976 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.976 Message: lib/distributor: Defining dependency "distributor" 00:02:14.976 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.976 Message: lib/efd: Defining dependency "efd" 00:02:14.976 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.976 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:14.976 Message: lib/gpudev: Defining dependency "gpudev" 00:02:14.976 Message: lib/gro: Defining dependency "gro" 00:02:14.976 Message: lib/gso: Defining dependency "gso" 00:02:14.976 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.976 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.976 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.976 Message: lib/lpm: Defining dependency "lpm" 00:02:14.976 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.976 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:14.976 Message: lib/member: Defining dependency "member" 00:02:14.976 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.976 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.976 Message: lib/power: Defining dependency "power" 00:02:14.976 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.976 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.976 Message: lib/mldev: Defining dependency "mldev" 00:02:14.976 Message: lib/rib: Defining dependency "rib" 00:02:14.976 Message: lib/reorder: Defining dependency "reorder" 00:02:14.976 Message: lib/sched: Defining dependency "sched" 00:02:14.976 Message: lib/security: Defining dependency "security" 00:02:14.976 Message: lib/stack: Defining dependency "stack" 00:02:14.976 Has header "linux/userfaultfd.h" : YES 00:02:14.976 Has header "linux/vduse.h" : YES 00:02:14.976 Message: lib/vhost: Defining dependency "vhost" 00:02:14.976 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.976 Message: lib/pdcp: Defining dependency "pdcp" 00:02:14.976 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.976 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.976 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:14.976 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.976 Message: lib/fib: Defining dependency "fib" 00:02:14.976 Message: lib/port: Defining dependency "port" 00:02:14.976 Message: lib/pdump: Defining dependency "pdump" 00:02:14.976 Message: lib/table: Defining dependency "table" 00:02:14.976 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.976 Message: lib/graph: Defining dependency "graph" 00:02:14.976 Message: lib/node: Defining dependency "node" 00:02:14.976 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:16.879 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:16.879 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:16.879 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:16.879 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:16.879 Compiler for C supports arguments -Wno-unused-value: YES 00:02:16.879 Compiler for C supports arguments -Wno-format: YES 00:02:16.879 Compiler for C supports arguments -Wno-format-security: YES 00:02:16.879 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:16.879 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:16.879 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:16.879 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:16.879 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.879 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.879 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:16.879 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:16.879 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:16.879 Has header "sys/epoll.h" : YES 00:02:16.879 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:16.879 Configuring doxy-api-html.conf using configuration 00:02:16.879 Configuring doxy-api-man.conf using configuration 00:02:16.879 Program mandb found: YES (/usr/bin/mandb) 00:02:16.879 Program sphinx-build found: NO 00:02:16.879 Configuring rte_build_config.h using configuration 00:02:16.879 Message: 00:02:16.879 ================= 00:02:16.879 Applications Enabled 00:02:16.879 ================= 
00:02:16.879 00:02:16.879 apps: 00:02:16.879 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:16.879 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:16.879 test-pmd, test-regex, test-sad, test-security-perf, 00:02:16.879 00:02:16.879 Message: 00:02:16.879 ================= 00:02:16.879 Libraries Enabled 00:02:16.879 ================= 00:02:16.879 00:02:16.879 libs: 00:02:16.879 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:16.879 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:16.879 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:16.879 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:16.879 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:16.879 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:16.879 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:16.879 00:02:16.879 00:02:16.879 Message: 00:02:16.879 =============== 00:02:16.879 Drivers Enabled 00:02:16.879 =============== 00:02:16.879 00:02:16.879 common: 00:02:16.879 00:02:16.879 bus: 00:02:16.879 pci, vdev, 00:02:16.879 mempool: 00:02:16.879 ring, 00:02:16.879 dma: 00:02:16.879 00:02:16.879 net: 00:02:16.879 i40e, 00:02:16.879 raw: 00:02:16.879 00:02:16.879 crypto: 00:02:16.879 00:02:16.879 compress: 00:02:16.879 00:02:16.880 regex: 00:02:16.880 00:02:16.880 ml: 00:02:16.880 00:02:16.880 vdpa: 00:02:16.880 00:02:16.880 event: 00:02:16.880 00:02:16.880 baseband: 00:02:16.880 00:02:16.880 gpu: 00:02:16.880 00:02:16.880 00:02:16.880 Message: 00:02:16.880 ================= 00:02:16.880 Content Skipped 00:02:16.880 ================= 00:02:16.880 00:02:16.880 apps: 00:02:16.880 00:02:16.880 libs: 00:02:16.880 00:02:16.880 drivers: 00:02:16.880 common/cpt: not in enabled drivers build config 00:02:16.880 common/dpaax: not in enabled drivers build config 00:02:16.880 common/iavf: not in enabled drivers build config 00:02:16.880 common/idpf: not in enabled drivers build config 00:02:16.880 common/mvep: not in enabled drivers build config 00:02:16.880 common/octeontx: not in enabled drivers build config 00:02:16.880 bus/auxiliary: not in enabled drivers build config 00:02:16.880 bus/cdx: not in enabled drivers build config 00:02:16.880 bus/dpaa: not in enabled drivers build config 00:02:16.880 bus/fslmc: not in enabled drivers build config 00:02:16.880 bus/ifpga: not in enabled drivers build config 00:02:16.880 bus/platform: not in enabled drivers build config 00:02:16.880 bus/vmbus: not in enabled drivers build config 00:02:16.880 common/cnxk: not in enabled drivers build config 00:02:16.880 common/mlx5: not in enabled drivers build config 00:02:16.880 common/nfp: not in enabled drivers build config 00:02:16.880 common/qat: not in enabled drivers build config 00:02:16.880 common/sfc_efx: not in enabled drivers build config 00:02:16.880 mempool/bucket: not in enabled drivers build config 00:02:16.880 mempool/cnxk: not in enabled drivers build config 00:02:16.880 mempool/dpaa: not in enabled drivers build config 00:02:16.880 mempool/dpaa2: not in enabled drivers build config 00:02:16.880 mempool/octeontx: not in enabled drivers build config 00:02:16.880 mempool/stack: not in enabled drivers build config 00:02:16.880 dma/cnxk: not in enabled drivers build config 00:02:16.880 dma/dpaa: not in enabled drivers build config 00:02:16.880 dma/dpaa2: not in enabled drivers build config 00:02:16.880 
dma/hisilicon: not in enabled drivers build config 00:02:16.880 dma/idxd: not in enabled drivers build config 00:02:16.880 dma/ioat: not in enabled drivers build config 00:02:16.880 dma/skeleton: not in enabled drivers build config 00:02:16.880 net/af_packet: not in enabled drivers build config 00:02:16.880 net/af_xdp: not in enabled drivers build config 00:02:16.880 net/ark: not in enabled drivers build config 00:02:16.880 net/atlantic: not in enabled drivers build config 00:02:16.880 net/avp: not in enabled drivers build config 00:02:16.880 net/axgbe: not in enabled drivers build config 00:02:16.880 net/bnx2x: not in enabled drivers build config 00:02:16.880 net/bnxt: not in enabled drivers build config 00:02:16.880 net/bonding: not in enabled drivers build config 00:02:16.880 net/cnxk: not in enabled drivers build config 00:02:16.880 net/cpfl: not in enabled drivers build config 00:02:16.880 net/cxgbe: not in enabled drivers build config 00:02:16.880 net/dpaa: not in enabled drivers build config 00:02:16.880 net/dpaa2: not in enabled drivers build config 00:02:16.880 net/e1000: not in enabled drivers build config 00:02:16.880 net/ena: not in enabled drivers build config 00:02:16.880 net/enetc: not in enabled drivers build config 00:02:16.880 net/enetfec: not in enabled drivers build config 00:02:16.880 net/enic: not in enabled drivers build config 00:02:16.880 net/failsafe: not in enabled drivers build config 00:02:16.880 net/fm10k: not in enabled drivers build config 00:02:16.880 net/gve: not in enabled drivers build config 00:02:16.880 net/hinic: not in enabled drivers build config 00:02:16.880 net/hns3: not in enabled drivers build config 00:02:16.880 net/iavf: not in enabled drivers build config 00:02:16.880 net/ice: not in enabled drivers build config 00:02:16.880 net/idpf: not in enabled drivers build config 00:02:16.880 net/igc: not in enabled drivers build config 00:02:16.880 net/ionic: not in enabled drivers build config 00:02:16.880 net/ipn3ke: not in enabled drivers build config 00:02:16.880 net/ixgbe: not in enabled drivers build config 00:02:16.880 net/mana: not in enabled drivers build config 00:02:16.880 net/memif: not in enabled drivers build config 00:02:16.880 net/mlx4: not in enabled drivers build config 00:02:16.880 net/mlx5: not in enabled drivers build config 00:02:16.880 net/mvneta: not in enabled drivers build config 00:02:16.880 net/mvpp2: not in enabled drivers build config 00:02:16.880 net/netvsc: not in enabled drivers build config 00:02:16.880 net/nfb: not in enabled drivers build config 00:02:16.880 net/nfp: not in enabled drivers build config 00:02:16.880 net/ngbe: not in enabled drivers build config 00:02:16.880 net/null: not in enabled drivers build config 00:02:16.880 net/octeontx: not in enabled drivers build config 00:02:16.880 net/octeon_ep: not in enabled drivers build config 00:02:16.880 net/pcap: not in enabled drivers build config 00:02:16.880 net/pfe: not in enabled drivers build config 00:02:16.880 net/qede: not in enabled drivers build config 00:02:16.880 net/ring: not in enabled drivers build config 00:02:16.880 net/sfc: not in enabled drivers build config 00:02:16.880 net/softnic: not in enabled drivers build config 00:02:16.880 net/tap: not in enabled drivers build config 00:02:16.880 net/thunderx: not in enabled drivers build config 00:02:16.880 net/txgbe: not in enabled drivers build config 00:02:16.880 net/vdev_netvsc: not in enabled drivers build config 00:02:16.880 net/vhost: not in enabled drivers build config 00:02:16.880 net/virtio: 
not in enabled drivers build config 00:02:16.880 net/vmxnet3: not in enabled drivers build config 00:02:16.880 raw/cnxk_bphy: not in enabled drivers build config 00:02:16.880 raw/cnxk_gpio: not in enabled drivers build config 00:02:16.880 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:16.880 raw/ifpga: not in enabled drivers build config 00:02:16.880 raw/ntb: not in enabled drivers build config 00:02:16.880 raw/skeleton: not in enabled drivers build config 00:02:16.880 crypto/armv8: not in enabled drivers build config 00:02:16.880 crypto/bcmfs: not in enabled drivers build config 00:02:16.880 crypto/caam_jr: not in enabled drivers build config 00:02:16.880 crypto/ccp: not in enabled drivers build config 00:02:16.880 crypto/cnxk: not in enabled drivers build config 00:02:16.880 crypto/dpaa_sec: not in enabled drivers build config 00:02:16.880 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.880 crypto/ipsec_mb: not in enabled drivers build config 00:02:16.880 crypto/mlx5: not in enabled drivers build config 00:02:16.880 crypto/mvsam: not in enabled drivers build config 00:02:16.880 crypto/nitrox: not in enabled drivers build config 00:02:16.880 crypto/null: not in enabled drivers build config 00:02:16.880 crypto/octeontx: not in enabled drivers build config 00:02:16.880 crypto/openssl: not in enabled drivers build config 00:02:16.880 crypto/scheduler: not in enabled drivers build config 00:02:16.880 crypto/uadk: not in enabled drivers build config 00:02:16.880 crypto/virtio: not in enabled drivers build config 00:02:16.880 compress/isal: not in enabled drivers build config 00:02:16.880 compress/mlx5: not in enabled drivers build config 00:02:16.880 compress/octeontx: not in enabled drivers build config 00:02:16.880 compress/zlib: not in enabled drivers build config 00:02:16.880 regex/mlx5: not in enabled drivers build config 00:02:16.880 regex/cn9k: not in enabled drivers build config 00:02:16.880 ml/cnxk: not in enabled drivers build config 00:02:16.880 vdpa/ifc: not in enabled drivers build config 00:02:16.880 vdpa/mlx5: not in enabled drivers build config 00:02:16.880 vdpa/nfp: not in enabled drivers build config 00:02:16.880 vdpa/sfc: not in enabled drivers build config 00:02:16.880 event/cnxk: not in enabled drivers build config 00:02:16.880 event/dlb2: not in enabled drivers build config 00:02:16.880 event/dpaa: not in enabled drivers build config 00:02:16.880 event/dpaa2: not in enabled drivers build config 00:02:16.880 event/dsw: not in enabled drivers build config 00:02:16.880 event/opdl: not in enabled drivers build config 00:02:16.880 event/skeleton: not in enabled drivers build config 00:02:16.880 event/sw: not in enabled drivers build config 00:02:16.880 event/octeontx: not in enabled drivers build config 00:02:16.880 baseband/acc: not in enabled drivers build config 00:02:16.880 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:16.880 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:16.880 baseband/la12xx: not in enabled drivers build config 00:02:16.880 baseband/null: not in enabled drivers build config 00:02:16.880 baseband/turbo_sw: not in enabled drivers build config 00:02:16.880 gpu/cuda: not in enabled drivers build config 00:02:16.880 00:02:16.880 00:02:16.880 Build targets in project: 220 00:02:16.880 00:02:16.880 DPDK 23.11.0 00:02:16.880 00:02:16.880 User defined options 00:02:16.880 libdir : lib 00:02:16.880 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:16.880 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:16.880 c_link_args : 00:02:16.880 enable_docs : false 00:02:16.880 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:16.880 enable_kmods : false 00:02:16.880 machine : native 00:02:16.880 tests : false 00:02:16.880 00:02:16.880 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.880 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:16.880 10:58:27 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:16.880 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.138 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.138 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.138 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.138 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.138 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.138 [6/710] Linking static target lib/librte_kvargs.a 00:02:17.138 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.138 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.138 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.138 [10/710] Linking static target lib/librte_log.a 00:02:17.396 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.655 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.655 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.655 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.655 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.655 [16/710] Linking target lib/librte_log.so.24.0 00:02:17.655 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.655 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.914 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.914 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.914 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.172 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.172 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:18.172 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:18.172 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:18.172 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.430 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.430 [28/710] Linking static target lib/librte_telemetry.a 00:02:18.430 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.430 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.430 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.688 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.688 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.688 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.688 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:18.688 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.688 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.688 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.688 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.946 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:18.946 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.946 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.946 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.946 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.204 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.204 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.204 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.462 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.462 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.462 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.462 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.720 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.720 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.720 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.720 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.720 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.720 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.978 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.978 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.978 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.978 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.978 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.235 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.235 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.235 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.235 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.235 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.235 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.493 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.750 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.750 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.750 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:02:20.750 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.750 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.750 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.750 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.750 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.008 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.008 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.267 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.267 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.267 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.267 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.526 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.526 [85/710] Linking static target lib/librte_ring.a 00:02:21.526 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.526 [87/710] Linking static target lib/librte_eal.a 00:02:21.526 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.784 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.784 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.784 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.042 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.042 [93/710] Linking static target lib/librte_mempool.a 00:02:22.042 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.042 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.042 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.042 [97/710] Linking static target lib/librte_rcu.a 00:02:22.300 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.300 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.300 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.559 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.559 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.559 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.559 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.559 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.816 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.816 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.816 [108/710] Linking static target lib/librte_mbuf.a 00:02:23.075 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.075 [110/710] Linking static target lib/librte_net.a 00:02:23.075 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.075 [112/710] Linking static target lib/librte_meter.a 00:02:23.075 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.333 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.333 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.333 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.333 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.899 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.157 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.157 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.414 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.414 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.414 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.414 [126/710] Linking static target lib/librte_pci.a 00:02:24.414 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.414 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.672 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.672 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.672 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.672 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.672 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.929 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.929 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.929 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.929 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.929 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.929 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.929 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.199 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.199 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.199 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.199 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.199 [145/710] Linking static target lib/librte_cmdline.a 00:02:25.469 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:25.469 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:25.469 [148/710] Linking static target lib/librte_metrics.a 00:02:25.727 [149/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.727 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.984 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.242 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.242 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.242 [154/710] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.242 [155/710] Linking static target lib/librte_timer.a 00:02:26.501 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.760 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:26.760 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:27.019 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:27.019 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:27.587 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:27.587 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:27.587 [163/710] Linking static target lib/librte_bitratestats.a 00:02:27.587 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:27.846 [165/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.846 [166/710] Linking static target lib/librte_ethdev.a 00:02:27.846 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.846 [168/710] Linking target lib/librte_eal.so.24.0 00:02:27.846 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:27.846 [170/710] Linking static target lib/librte_bbdev.a 00:02:27.846 [171/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.846 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.105 [173/710] Linking static target lib/librte_hash.a 00:02:28.105 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:28.105 [175/710] Linking target lib/librte_ring.so.24.0 00:02:28.105 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:28.105 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:28.379 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:28.379 [179/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:28.379 [180/710] Linking target lib/librte_mempool.so.24.0 00:02:28.379 [181/710] Linking target lib/librte_meter.so.24.0 00:02:28.379 [182/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:28.379 [183/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:28.379 [184/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.638 [185/710] Linking target lib/librte_pci.so.24.0 00:02:28.638 [186/710] Linking target lib/librte_mbuf.so.24.0 00:02:28.638 [187/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:28.638 [188/710] Linking target lib/librte_timer.so.24.0 00:02:28.638 [189/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.638 [190/710] Linking static target lib/acl/libavx2_tmp.a 00:02:28.638 [191/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:28.638 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:28.638 [193/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:28.638 [194/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:28.638 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:28.638 [196/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:28.638 [197/710] Linking static 
target lib/acl/libavx512_tmp.a 00:02:28.638 [198/710] Linking target lib/librte_net.so.24.0 00:02:28.638 [199/710] Linking target lib/librte_bbdev.so.24.0 00:02:28.897 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:28.897 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:28.897 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:28.897 [203/710] Linking static target lib/librte_acl.a 00:02:28.897 [204/710] Linking target lib/librte_hash.so.24.0 00:02:28.897 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:28.897 [206/710] Linking static target lib/librte_cfgfile.a 00:02:29.156 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:29.156 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:29.156 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.156 [210/710] Linking target lib/librte_acl.so.24.0 00:02:29.415 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:29.415 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.415 [213/710] Linking target lib/librte_cfgfile.so.24.0 00:02:29.415 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:29.415 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:29.674 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:29.674 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.674 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:29.933 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.933 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:29.933 [221/710] Linking static target lib/librte_bpf.a 00:02:30.191 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.191 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.191 [224/710] Linking static target lib/librte_compressdev.a 00:02:30.191 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.191 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.449 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:30.449 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:30.707 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.707 [230/710] Linking target lib/librte_compressdev.so.24.0 00:02:30.707 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.707 [232/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:30.707 [233/710] Linking static target lib/librte_distributor.a 00:02:30.966 [234/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:30.966 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.966 [236/710] Linking static target lib/librte_dmadev.a 00:02:30.966 [237/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.966 [238/710] Linking target lib/librte_distributor.so.24.0 
00:02:31.226 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.226 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:31.486 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:31.486 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.744 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:31.744 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:31.744 [245/710] Linking static target lib/librte_efd.a 00:02:31.744 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:32.002 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.002 [248/710] Linking static target lib/librte_cryptodev.a 00:02:32.002 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.002 [250/710] Linking target lib/librte_efd.so.24.0 00:02:32.002 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:32.568 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:32.568 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:32.568 [254/710] Linking static target lib/librte_dispatcher.a 00:02:32.568 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.568 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:32.827 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:32.827 [258/710] Linking static target lib/librte_gpudev.a 00:02:32.827 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:32.827 [260/710] Linking target lib/librte_metrics.so.24.0 00:02:32.827 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:33.087 [262/710] Linking target lib/librte_bpf.so.24.0 00:02:33.087 [263/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:33.087 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:33.087 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.087 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:33.087 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:02:33.087 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:33.346 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:33.346 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.346 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:33.346 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:33.605 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:33.606 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.606 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:33.606 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:33.865 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:33.865 [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:33.865 [279/710] Linking 
static target lib/librte_eventdev.a 00:02:33.865 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:33.865 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:33.865 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:33.865 [283/710] Linking static target lib/librte_gro.a 00:02:33.865 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:34.124 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:34.124 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.124 [287/710] Linking target lib/librte_gro.so.24.0 00:02:34.124 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:34.382 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:34.382 [290/710] Linking static target lib/librte_gso.a 00:02:34.382 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.640 [292/710] Linking target lib/librte_gso.so.24.0 00:02:34.640 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:34.640 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:34.640 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:34.640 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:34.640 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:34.898 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:34.898 [299/710] Linking static target lib/librte_jobstats.a 00:02:34.898 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:34.898 [301/710] Linking static target lib/librte_ip_frag.a 00:02:34.898 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:34.898 [303/710] Linking static target lib/librte_latencystats.a 00:02:35.156 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.156 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:35.156 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.156 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:02:35.156 [308/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.414 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:35.414 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:35.414 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:35.414 [312/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:35.414 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:35.414 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.414 [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:35.414 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.672 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.930 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.930 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:35.930 [320/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:35.930 [321/710] Linking static target lib/librte_lpm.a 00:02:35.930 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:35.930 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:36.188 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:36.188 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.188 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.188 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:36.188 [328/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.188 [329/710] Linking static target lib/librte_pcapng.a 00:02:36.188 [330/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.445 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.445 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:36.445 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:36.445 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:36.445 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.702 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:36.702 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.703 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:36.703 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.960 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.960 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:36.960 [342/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:36.960 [343/710] Linking static target lib/librte_member.a 00:02:36.960 [344/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.960 [345/710] Linking static target lib/librte_power.a 00:02:37.218 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:37.218 [347/710] Linking static target lib/librte_regexdev.a 00:02:37.218 [348/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:37.218 [349/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:37.218 [350/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:37.218 [351/710] Linking static target lib/librte_rawdev.a 00:02:37.476 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.476 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:37.476 [354/710] Linking target lib/librte_member.so.24.0 00:02:37.476 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:37.476 [356/710] Linking static target lib/librte_mldev.a 00:02:37.476 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:37.734 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.734 [359/710] Linking target lib/librte_power.so.24.0 00:02:37.735 [360/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:37.735 [361/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:37.735 [362/710] Linking target lib/librte_rawdev.so.24.0 00:02:37.735 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.735 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:37.991 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:37.991 [366/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:38.248 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.248 [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.248 [369/710] Linking static target lib/librte_reorder.a 00:02:38.248 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:38.248 [371/710] Linking static target lib/librte_rib.a 00:02:38.248 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:38.248 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:38.505 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:38.505 [375/710] Linking static target lib/librte_stack.a 00:02:38.505 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.505 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:38.763 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:38.763 [379/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.763 [380/710] Linking static target lib/librte_security.a 00:02:38.763 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.763 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:38.763 [383/710] Linking target lib/librte_stack.so.24.0 00:02:38.763 [384/710] Linking target lib/librte_rib.so.24.0 00:02:38.763 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.763 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:38.763 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:39.022 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.022 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.022 [390/710] Linking target lib/librte_security.so.24.0 00:02:39.022 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.280 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.280 [393/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:39.280 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:39.280 [395/710] Linking static target lib/librte_sched.a 00:02:39.538 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.538 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:39.796 [398/710] Linking target lib/librte_sched.so.24.0 00:02:39.796 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.796 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:40.055 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.055 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:40.314 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:40.314 [404/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.572 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:40.572 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:40.831 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:40.831 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:41.089 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:41.089 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:41.089 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:41.089 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:41.089 [413/710] Linking static target lib/librte_ipsec.a 00:02:41.348 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:41.348 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:41.348 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.606 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:41.606 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:41.606 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:41.606 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:41.606 [421/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:41.606 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:41.606 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.536 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:42.536 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:42.536 [426/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:42.536 [427/710] Linking static target lib/librte_fib.a 00:02:42.536 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:42.536 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:42.536 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:42.795 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:42.795 [432/710] Linking static target lib/librte_pdcp.a 00:02:42.795 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.053 [434/710] Linking target lib/librte_fib.so.24.0 00:02:43.053 [435/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:43.053 [436/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.053 [437/710] Linking target lib/librte_pdcp.so.24.0 00:02:43.618 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.618 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.618 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.618 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.895 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:43.896 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:43.896 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:44.209 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:44.209 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:44.209 [447/710] Linking static target lib/librte_port.a 00:02:44.466 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:44.466 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:44.725 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:44.725 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:44.725 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.725 [453/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.725 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.725 [455/710] Linking target lib/librte_port.so.24.0 00:02:44.725 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.983 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.983 [458/710] Linking static target lib/librte_pdump.a 00:02:44.983 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:45.241 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.241 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:45.241 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:45.499 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:45.757 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:45.757 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:45.757 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:45.757 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:46.016 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:46.016 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:46.274 [470/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:46.274 [471/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:46.274 [472/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:46.274 [473/710] Linking static target lib/librte_table.a 00:02:46.839 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:46.839 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.839 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:47.096 [477/710] Linking target lib/librte_table.so.24.0 00:02:47.096 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:47.096 [479/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:47.355 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:47.613 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:47.613 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:47.871 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:47.871 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:47.871 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:47.871 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:48.436 [487/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:48.436 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:48.436 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:48.436 [490/710] Linking static target lib/librte_graph.a 00:02:48.694 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:48.694 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:48.694 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:49.260 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.260 [495/710] Linking target lib/librte_graph.so.24.0 00:02:49.260 [496/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:49.260 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:49.260 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:49.260 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:49.826 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:49.826 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.826 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:49.826 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:50.084 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:50.084 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.084 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:50.342 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:50.342 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:50.599 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.599 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.599 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.857 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.857 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:50.857 [514/710] Linking static target lib/librte_node.a 00:02:50.857 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.114 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.114 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.114 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.114 [519/710] Linking target lib/librte_node.so.24.0 00:02:51.114 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.371 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.372 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.372 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.372 [524/710] Linking static target drivers/librte_bus_vdev.a 00:02:51.372 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.372 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.629 [527/710] Linking static target drivers/librte_bus_pci.a 00:02:51.629 [528/710] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:51.888 [529/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.888 [530/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.888 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.888 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:51.888 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:51.888 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:51.888 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:51.888 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.146 [537/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.146 [538/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.146 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:52.146 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:52.146 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.146 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.146 [543/710] Linking static target drivers/librte_mempool_ring.a 00:02:52.146 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.404 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:52.404 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:52.663 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:52.920 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:53.179 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:53.179 [550/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:53.179 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:54.112 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:54.112 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:54.112 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:54.112 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:54.112 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:54.112 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:54.675 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:54.675 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:54.932 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:54.932 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:54.932 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:55.498 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:55.755 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:55.755 
[565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:55.755 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:56.012 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:56.269 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:56.269 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:56.526 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:56.526 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:56.526 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:56.526 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:56.526 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.526 [575/710] Linking static target lib/librte_vhost.a 00:02:56.784 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:57.043 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:57.043 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:57.043 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:57.301 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:57.301 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:57.301 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:57.559 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:57.818 [584/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.818 [585/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:57.818 [586/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:57.818 [587/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:57.818 [588/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:57.818 [589/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:57.818 [590/710] Linking target lib/librte_vhost.so.24.0 00:02:57.818 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:57.818 [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:57.818 [593/710] Linking static target drivers/librte_net_i40e.a 00:02:57.818 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.385 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:58.385 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.385 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:58.670 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:58.670 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:58.670 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:58.929 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:59.187 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:59.187 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:59.187 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:59.187 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:59.444 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.444 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:59.702 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:59.961 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:59.961 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:59.961 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:00.219 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:00.219 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:00.219 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.219 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:00.219 [616/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:00.477 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:00.735 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:00.735 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:00.735 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:00.994 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:00.994 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:01.252 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:01.820 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:01.820 [625/710] Linking static target lib/librte_pipeline.a 00:03:02.076 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:02.076 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:02.076 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:02.334 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:02.334 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:02.334 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:02.591 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:02.591 [633/710] Linking target app/dpdk-graph 00:03:02.591 [634/710] Linking target app/dpdk-dumpcap 00:03:02.591 [635/710] Linking target app/dpdk-pdump 00:03:02.591 [636/710] Linking target app/dpdk-proc-info 00:03:02.848 [637/710] Linking target app/dpdk-test-acl 00:03:02.848 [638/710] Linking target app/dpdk-test-cmdline 00:03:03.107 [639/710] Linking target app/dpdk-test-compress-perf 00:03:03.107 [640/710] Linking target app/dpdk-test-dma-perf 00:03:03.107 [641/710] Linking target app/dpdk-test-crypto-perf 00:03:03.107 [642/710] Linking target app/dpdk-test-fib 00:03:03.107 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:03.364 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:03.364 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.622 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:03.622 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:03.622 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:03.879 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:04.137 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:04.137 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:04.137 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.137 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:04.137 [654/710] Linking target app/dpdk-test-gpudev 00:03:04.394 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:04.394 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:04.651 [657/710] Linking target app/dpdk-test-eventdev 00:03:04.651 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:04.651 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:04.651 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:04.909 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.909 [662/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.909 [663/710] Linking target app/dpdk-test-flow-perf 00:03:04.909 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:04.909 [665/710] Linking target lib/librte_pipeline.so.24.0 00:03:04.909 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:05.166 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:05.166 [668/710] Linking target app/dpdk-test-bbdev 00:03:05.424 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:05.424 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:05.424 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:05.424 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:05.681 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:05.681 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:05.939 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:05.939 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:05.939 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:06.198 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:06.456 [679/710] Linking target app/dpdk-test-mldev 00:03:06.456 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:06.456 [681/710] Linking target app/dpdk-test-pipeline 00:03:06.456 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:06.713 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.279 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:07.279 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:07.279 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:07.279 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:07.279 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.537 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.795 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:07.795 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.053 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:08.053 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.312 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.570 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.828 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.085 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.085 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:09.085 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:09.085 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.085 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.342 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.342 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.342 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.600 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:09.600 [706/710] Linking target app/dpdk-test-regex 00:03:09.600 [707/710] Linking target app/dpdk-test-sad 00:03:10.168 [708/710] Linking target app/dpdk-testpmd 00:03:10.168 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.426 [710/710] Linking target app/dpdk-test-security-perf 00:03:10.426 10:59:21 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.684 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.684 [0/1] Installing files. 
00:03:10.946 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:10.946 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.948 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.949 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.950 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.951 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.951 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.211 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:11.212 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:11.212 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.212 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.474 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.474 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.474 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.474 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.474 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.475 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.476 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.477 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.477 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:11.477 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:11.477 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:11.477 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.477 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:11.477 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.477 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:11.477 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.477 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:11.477 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.477 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:11.477 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.477 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:11.477 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.477 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:11.477 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.477 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:11.477 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.477 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:11.477 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.477 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:11.477 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.477 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:11.477 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.477 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:11.477 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.478 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:11.478 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.478 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:11.478 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.478 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:11.478 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.478 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:11.478 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.478 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:11.478 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.478 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:11.478 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.478 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:11.478 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.478 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:11.478 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.478 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:11.478 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.478 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:11.478 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.478 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:11.478 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.478 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:11.478 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.478 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:11.478 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.478 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:11.478 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.478 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:11.478 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:11.478 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:11.478 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.478 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:11.478 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.478 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:11.478 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.478 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:11.478 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.478 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:11.478 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.478 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:11.478 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.478 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:11.478 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.478 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:11.478 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.478 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:11.478 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.478 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:11.478 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.478 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:11.478 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.478 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:11.478 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.478 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:11.478 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:11.478 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:11.478 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.478 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:11.478 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.478 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:11.478 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.478 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:11.478 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.478 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:11.478 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:11.478 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:11.478 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:11.478 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:11.478 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:11.478 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:11.478 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:11.478 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:11.478 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:11.478 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:11.478 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:11.478 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:11.478 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.478 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:11.478 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.478 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:11.478 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.478 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:11.478 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:11.478 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:11.478 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.478 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:11.478 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.478 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:11.478 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.478 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:11.478 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.478 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:11.478 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.478 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:11.478 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.478 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:11.478 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.478 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:11.478 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:11.478 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:11.478 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:11.478 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:11.478 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:11.479 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:11.479 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:11.479 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:11.479 10:59:22 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:11.479 10:59:22 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.479 10:59:22 -- common/autobuild_common.sh@203 -- $ cat 00:03:11.479 10:59:22 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.479 00:03:11.479 real 1m2.022s 00:03:11.479 user 7m38.485s 00:03:11.479 sys 1m4.701s 00:03:11.479 10:59:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:11.479 10:59:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.479 ************************************ 00:03:11.479 END TEST build_native_dpdk 00:03:11.479 ************************************ 00:03:11.736 10:59:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.736 10:59:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.736 10:59:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.736 10:59:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.736 10:59:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.736 10:59:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.737 10:59:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.737 10:59:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:11.737 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.995 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.995 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:11.995 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.254 Using 'verbs' RDMA provider 00:03:25.407 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:40.307 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:40.307 Creating mk/config.mk...done. 00:03:40.307 Creating mk/cc.flags.mk...done. 00:03:40.307 Type 'make' to build. 00:03:40.307 10:59:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.307 10:59:49 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:40.307 10:59:49 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:40.307 10:59:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.307 ************************************ 00:03:40.307 START TEST make 00:03:40.307 ************************************ 00:03:40.307 10:59:49 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:40.307 make[1]: Nothing to be done for 'all'. 
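(Note on the step above: the configure call links this SPDK build against the DPDK tree that was just installed, pointing --with-dpdk at /home/vagrant/spdk_repo/dpdk/build and picking up its pkg-config data from build/lib/pkgconfig, as the "Using ... for additional libs" line records; the CC/CXX lines that follow are the resulting make -j10 of the SPDK libraries and tests. A minimal sketch of the equivalent manual invocation, assuming the same paths as in this log and keeping only a subset of the flags shown in the configure line above:

  cd /home/vagrant/spdk_repo/spdk
  # point SPDK at the pre-built DPDK install prefix; pkg-config files live under build/lib/pkgconfig
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared --enable-debug --enable-coverage
  make -j10
)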
00:04:02.269 CC lib/ut_mock/mock.o 00:04:02.269 CC lib/log/log.o 00:04:02.269 CC lib/log/log_deprecated.o 00:04:02.269 CC lib/log/log_flags.o 00:04:02.269 CC lib/ut/ut.o 00:04:02.269 LIB libspdk_ut_mock.a 00:04:02.269 LIB libspdk_log.a 00:04:02.269 LIB libspdk_ut.a 00:04:02.269 SO libspdk_ut_mock.so.5.0 00:04:02.269 SO libspdk_ut.so.1.0 00:04:02.269 SO libspdk_log.so.6.1 00:04:02.269 SYMLINK libspdk_ut_mock.so 00:04:02.269 SYMLINK libspdk_ut.so 00:04:02.269 SYMLINK libspdk_log.so 00:04:02.269 CC lib/dma/dma.o 00:04:02.269 CC lib/ioat/ioat.o 00:04:02.269 CC lib/util/base64.o 00:04:02.269 CC lib/util/bit_array.o 00:04:02.269 CC lib/util/cpuset.o 00:04:02.269 CXX lib/trace_parser/trace.o 00:04:02.269 CC lib/util/crc16.o 00:04:02.269 CC lib/util/crc32.o 00:04:02.269 CC lib/util/crc32c.o 00:04:02.269 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.269 CC lib/vfio_user/host/vfio_user.o 00:04:02.269 CC lib/util/crc32_ieee.o 00:04:02.269 CC lib/util/crc64.o 00:04:02.269 LIB libspdk_dma.a 00:04:02.269 CC lib/util/dif.o 00:04:02.269 SO libspdk_dma.so.3.0 00:04:02.269 CC lib/util/fd.o 00:04:02.269 CC lib/util/file.o 00:04:02.269 SYMLINK libspdk_dma.so 00:04:02.269 CC lib/util/hexlify.o 00:04:02.269 CC lib/util/iov.o 00:04:02.269 LIB libspdk_ioat.a 00:04:02.269 CC lib/util/math.o 00:04:02.269 SO libspdk_ioat.so.6.0 00:04:02.269 LIB libspdk_vfio_user.a 00:04:02.269 CC lib/util/pipe.o 00:04:02.269 SYMLINK libspdk_ioat.so 00:04:02.269 CC lib/util/strerror_tls.o 00:04:02.269 SO libspdk_vfio_user.so.4.0 00:04:02.269 CC lib/util/string.o 00:04:02.269 CC lib/util/uuid.o 00:04:02.269 CC lib/util/fd_group.o 00:04:02.269 SYMLINK libspdk_vfio_user.so 00:04:02.269 CC lib/util/xor.o 00:04:02.269 CC lib/util/zipf.o 00:04:02.527 LIB libspdk_util.a 00:04:02.786 SO libspdk_util.so.8.0 00:04:02.786 SYMLINK libspdk_util.so 00:04:02.786 CC lib/rdma/common.o 00:04:02.786 CC lib/rdma/rdma_verbs.o 00:04:03.045 CC lib/env_dpdk/env.o 00:04:03.045 CC lib/env_dpdk/memory.o 00:04:03.045 CC lib/env_dpdk/pci.o 00:04:03.045 CC lib/vmd/vmd.o 00:04:03.045 CC lib/idxd/idxd.o 00:04:03.045 CC lib/conf/conf.o 00:04:03.045 CC lib/json/json_parse.o 00:04:03.045 LIB libspdk_trace_parser.a 00:04:03.045 SO libspdk_trace_parser.so.4.0 00:04:03.045 SYMLINK libspdk_trace_parser.so 00:04:03.045 CC lib/vmd/led.o 00:04:03.045 CC lib/idxd/idxd_user.o 00:04:03.045 LIB libspdk_conf.a 00:04:03.305 SO libspdk_conf.so.5.0 00:04:03.305 CC lib/json/json_util.o 00:04:03.305 LIB libspdk_rdma.a 00:04:03.305 SO libspdk_rdma.so.5.0 00:04:03.305 SYMLINK libspdk_conf.so 00:04:03.305 CC lib/json/json_write.o 00:04:03.305 CC lib/env_dpdk/init.o 00:04:03.305 SYMLINK libspdk_rdma.so 00:04:03.305 CC lib/env_dpdk/threads.o 00:04:03.305 CC lib/env_dpdk/pci_ioat.o 00:04:03.305 CC lib/idxd/idxd_kernel.o 00:04:03.305 CC lib/env_dpdk/pci_virtio.o 00:04:03.564 CC lib/env_dpdk/pci_vmd.o 00:04:03.564 CC lib/env_dpdk/pci_idxd.o 00:04:03.564 CC lib/env_dpdk/pci_event.o 00:04:03.564 LIB libspdk_idxd.a 00:04:03.564 LIB libspdk_json.a 00:04:03.564 SO libspdk_idxd.so.11.0 00:04:03.564 CC lib/env_dpdk/sigbus_handler.o 00:04:03.564 CC lib/env_dpdk/pci_dpdk.o 00:04:03.564 SO libspdk_json.so.5.1 00:04:03.564 LIB libspdk_vmd.a 00:04:03.564 SO libspdk_vmd.so.5.0 00:04:03.564 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:03.564 SYMLINK libspdk_idxd.so 00:04:03.564 SYMLINK libspdk_json.so 00:04:03.564 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.564 SYMLINK libspdk_vmd.so 00:04:03.823 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.823 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.823 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:03.823 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:04.085 LIB libspdk_jsonrpc.a 00:04:04.085 SO libspdk_jsonrpc.so.5.1 00:04:04.085 SYMLINK libspdk_jsonrpc.so 00:04:04.346 CC lib/rpc/rpc.o 00:04:04.346 LIB libspdk_env_dpdk.a 00:04:04.346 SO libspdk_env_dpdk.so.13.0 00:04:04.603 LIB libspdk_rpc.a 00:04:04.603 SYMLINK libspdk_env_dpdk.so 00:04:04.603 SO libspdk_rpc.so.5.0 00:04:04.603 SYMLINK libspdk_rpc.so 00:04:04.862 CC lib/trace/trace.o 00:04:04.862 CC lib/notify/notify.o 00:04:04.862 CC lib/notify/notify_rpc.o 00:04:04.862 CC lib/trace/trace_flags.o 00:04:04.862 CC lib/trace/trace_rpc.o 00:04:04.862 CC lib/sock/sock.o 00:04:04.862 CC lib/sock/sock_rpc.o 00:04:04.862 LIB libspdk_notify.a 00:04:05.120 SO libspdk_notify.so.5.0 00:04:05.120 LIB libspdk_trace.a 00:04:05.120 SO libspdk_trace.so.9.0 00:04:05.120 SYMLINK libspdk_notify.so 00:04:05.120 SYMLINK libspdk_trace.so 00:04:05.120 LIB libspdk_sock.a 00:04:05.120 SO libspdk_sock.so.8.0 00:04:05.378 SYMLINK libspdk_sock.so 00:04:05.378 CC lib/thread/iobuf.o 00:04:05.378 CC lib/thread/thread.o 00:04:05.378 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:05.378 CC lib/nvme/nvme_ctrlr.o 00:04:05.378 CC lib/nvme/nvme_fabric.o 00:04:05.378 CC lib/nvme/nvme_ns_cmd.o 00:04:05.378 CC lib/nvme/nvme_ns.o 00:04:05.378 CC lib/nvme/nvme_qpair.o 00:04:05.378 CC lib/nvme/nvme_pcie_common.o 00:04:05.378 CC lib/nvme/nvme_pcie.o 00:04:05.636 CC lib/nvme/nvme.o 00:04:06.203 CC lib/nvme/nvme_quirks.o 00:04:06.203 CC lib/nvme/nvme_transport.o 00:04:06.203 CC lib/nvme/nvme_discovery.o 00:04:06.203 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:06.462 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:06.462 CC lib/nvme/nvme_tcp.o 00:04:06.462 CC lib/nvme/nvme_opal.o 00:04:06.462 CC lib/nvme/nvme_io_msg.o 00:04:06.721 CC lib/nvme/nvme_poll_group.o 00:04:06.980 CC lib/nvme/nvme_zns.o 00:04:06.980 LIB libspdk_thread.a 00:04:06.980 CC lib/nvme/nvme_cuse.o 00:04:06.980 SO libspdk_thread.so.9.0 00:04:06.980 CC lib/nvme/nvme_vfio_user.o 00:04:06.980 CC lib/nvme/nvme_rdma.o 00:04:06.980 SYMLINK libspdk_thread.so 00:04:07.239 CC lib/accel/accel.o 00:04:07.239 CC lib/blob/blobstore.o 00:04:07.239 CC lib/init/json_config.o 00:04:07.497 CC lib/init/subsystem.o 00:04:07.497 CC lib/blob/request.o 00:04:07.497 CC lib/blob/zeroes.o 00:04:07.497 CC lib/blob/blob_bs_dev.o 00:04:07.756 CC lib/init/subsystem_rpc.o 00:04:07.756 CC lib/init/rpc.o 00:04:07.756 CC lib/accel/accel_rpc.o 00:04:07.756 CC lib/accel/accel_sw.o 00:04:07.756 LIB libspdk_init.a 00:04:07.756 SO libspdk_init.so.4.0 00:04:08.015 SYMLINK libspdk_init.so 00:04:08.015 CC lib/virtio/virtio.o 00:04:08.015 CC lib/virtio/virtio_vhost_user.o 00:04:08.015 CC lib/virtio/virtio_vfio_user.o 00:04:08.015 CC lib/virtio/virtio_pci.o 00:04:08.015 CC lib/event/app.o 00:04:08.015 CC lib/event/reactor.o 00:04:08.015 CC lib/event/log_rpc.o 00:04:08.015 LIB libspdk_accel.a 00:04:08.273 CC lib/event/app_rpc.o 00:04:08.273 SO libspdk_accel.so.14.0 00:04:08.273 SYMLINK libspdk_accel.so 00:04:08.273 CC lib/event/scheduler_static.o 00:04:08.273 LIB libspdk_virtio.a 00:04:08.273 SO libspdk_virtio.so.6.0 00:04:08.273 LIB libspdk_nvme.a 00:04:08.273 CC lib/bdev/bdev.o 00:04:08.273 CC lib/bdev/bdev_rpc.o 00:04:08.273 CC lib/bdev/bdev_zone.o 00:04:08.273 SYMLINK libspdk_virtio.so 00:04:08.273 CC lib/bdev/part.o 00:04:08.532 CC lib/bdev/scsi_nvme.o 00:04:08.532 LIB libspdk_event.a 00:04:08.532 SO libspdk_event.so.12.0 00:04:08.532 SO libspdk_nvme.so.12.0 00:04:08.532 SYMLINK libspdk_event.so 00:04:08.791 SYMLINK libspdk_nvme.so 00:04:10.167 
LIB libspdk_blob.a 00:04:10.167 SO libspdk_blob.so.10.1 00:04:10.167 SYMLINK libspdk_blob.so 00:04:10.167 CC lib/lvol/lvol.o 00:04:10.167 CC lib/blobfs/tree.o 00:04:10.167 CC lib/blobfs/blobfs.o 00:04:11.102 LIB libspdk_bdev.a 00:04:11.102 SO libspdk_bdev.so.14.0 00:04:11.102 LIB libspdk_blobfs.a 00:04:11.102 LIB libspdk_lvol.a 00:04:11.102 SO libspdk_blobfs.so.9.0 00:04:11.102 SO libspdk_lvol.so.9.1 00:04:11.102 SYMLINK libspdk_bdev.so 00:04:11.102 SYMLINK libspdk_blobfs.so 00:04:11.102 SYMLINK libspdk_lvol.so 00:04:11.360 CC lib/ublk/ublk.o 00:04:11.360 CC lib/ublk/ublk_rpc.o 00:04:11.360 CC lib/scsi/dev.o 00:04:11.360 CC lib/scsi/lun.o 00:04:11.360 CC lib/nvmf/ctrlr.o 00:04:11.360 CC lib/scsi/port.o 00:04:11.360 CC lib/nvmf/ctrlr_bdev.o 00:04:11.360 CC lib/nvmf/ctrlr_discovery.o 00:04:11.360 CC lib/ftl/ftl_core.o 00:04:11.360 CC lib/nbd/nbd.o 00:04:11.360 CC lib/nbd/nbd_rpc.o 00:04:11.360 CC lib/ftl/ftl_init.o 00:04:11.639 CC lib/scsi/scsi.o 00:04:11.639 CC lib/scsi/scsi_bdev.o 00:04:11.639 CC lib/ftl/ftl_layout.o 00:04:11.639 CC lib/ftl/ftl_debug.o 00:04:11.639 LIB libspdk_nbd.a 00:04:11.639 CC lib/ftl/ftl_io.o 00:04:11.639 CC lib/nvmf/subsystem.o 00:04:11.639 SO libspdk_nbd.so.6.0 00:04:11.639 CC lib/ftl/ftl_sb.o 00:04:11.639 SYMLINK libspdk_nbd.so 00:04:11.639 CC lib/nvmf/nvmf.o 00:04:11.897 CC lib/nvmf/nvmf_rpc.o 00:04:11.897 LIB libspdk_ublk.a 00:04:11.897 CC lib/ftl/ftl_l2p.o 00:04:11.897 SO libspdk_ublk.so.2.0 00:04:11.897 CC lib/ftl/ftl_l2p_flat.o 00:04:11.897 CC lib/nvmf/transport.o 00:04:11.897 CC lib/nvmf/tcp.o 00:04:11.897 SYMLINK libspdk_ublk.so 00:04:11.897 CC lib/scsi/scsi_pr.o 00:04:12.156 CC lib/nvmf/rdma.o 00:04:12.156 CC lib/ftl/ftl_nv_cache.o 00:04:12.156 CC lib/ftl/ftl_band.o 00:04:12.414 CC lib/scsi/scsi_rpc.o 00:04:12.414 CC lib/scsi/task.o 00:04:12.414 CC lib/ftl/ftl_band_ops.o 00:04:12.673 CC lib/ftl/ftl_writer.o 00:04:12.673 CC lib/ftl/ftl_rq.o 00:04:12.673 CC lib/ftl/ftl_reloc.o 00:04:12.673 LIB libspdk_scsi.a 00:04:12.673 SO libspdk_scsi.so.8.0 00:04:12.932 CC lib/ftl/ftl_l2p_cache.o 00:04:12.932 SYMLINK libspdk_scsi.so 00:04:12.932 CC lib/ftl/ftl_p2l.o 00:04:12.932 CC lib/ftl/mngt/ftl_mngt.o 00:04:12.932 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:12.932 CC lib/iscsi/conn.o 00:04:12.932 CC lib/vhost/vhost.o 00:04:12.932 CC lib/vhost/vhost_rpc.o 00:04:12.932 CC lib/vhost/vhost_scsi.o 00:04:13.190 CC lib/vhost/vhost_blk.o 00:04:13.190 CC lib/vhost/rte_vhost_user.o 00:04:13.190 CC lib/iscsi/init_grp.o 00:04:13.448 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:13.448 CC lib/iscsi/iscsi.o 00:04:13.448 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:13.448 CC lib/iscsi/md5.o 00:04:13.448 CC lib/iscsi/param.o 00:04:13.714 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:13.715 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:13.715 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:13.715 CC lib/iscsi/portal_grp.o 00:04:13.973 CC lib/iscsi/tgt_node.o 00:04:13.973 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:13.973 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:13.973 CC lib/iscsi/iscsi_subsystem.o 00:04:13.973 CC lib/iscsi/iscsi_rpc.o 00:04:14.232 CC lib/iscsi/task.o 00:04:14.232 LIB libspdk_nvmf.a 00:04:14.232 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.232 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.232 LIB libspdk_vhost.a 00:04:14.232 SO libspdk_nvmf.so.17.0 00:04:14.232 SO libspdk_vhost.so.7.1 00:04:14.232 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.232 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.232 CC lib/ftl/utils/ftl_conf.o 00:04:14.490 CC lib/ftl/utils/ftl_md.o 00:04:14.490 CC lib/ftl/utils/ftl_mempool.o 00:04:14.490 CC 
lib/ftl/utils/ftl_bitmap.o 00:04:14.490 SYMLINK libspdk_vhost.so 00:04:14.490 CC lib/ftl/utils/ftl_property.o 00:04:14.490 SYMLINK libspdk_nvmf.so 00:04:14.490 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:14.490 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:14.490 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:14.490 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:14.490 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:14.490 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:14.749 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:14.749 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:14.749 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:14.749 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:14.749 CC lib/ftl/base/ftl_base_dev.o 00:04:14.749 CC lib/ftl/base/ftl_base_bdev.o 00:04:14.749 CC lib/ftl/ftl_trace.o 00:04:14.749 LIB libspdk_iscsi.a 00:04:15.008 SO libspdk_iscsi.so.7.0 00:04:15.008 LIB libspdk_ftl.a 00:04:15.008 SYMLINK libspdk_iscsi.so 00:04:15.267 SO libspdk_ftl.so.8.0 00:04:15.525 SYMLINK libspdk_ftl.so 00:04:15.525 CC module/env_dpdk/env_dpdk_rpc.o 00:04:15.785 CC module/accel/ioat/accel_ioat.o 00:04:15.785 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:15.785 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:15.785 CC module/accel/dsa/accel_dsa.o 00:04:15.785 CC module/blob/bdev/blob_bdev.o 00:04:15.785 CC module/accel/error/accel_error.o 00:04:15.785 CC module/scheduler/gscheduler/gscheduler.o 00:04:15.785 CC module/accel/iaa/accel_iaa.o 00:04:15.785 CC module/sock/posix/posix.o 00:04:15.785 LIB libspdk_env_dpdk_rpc.a 00:04:15.785 SO libspdk_env_dpdk_rpc.so.5.0 00:04:15.785 LIB libspdk_scheduler_dpdk_governor.a 00:04:15.785 LIB libspdk_scheduler_gscheduler.a 00:04:15.785 SYMLINK libspdk_env_dpdk_rpc.so 00:04:15.785 SO libspdk_scheduler_gscheduler.so.3.0 00:04:15.785 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:15.785 CC module/accel/iaa/accel_iaa_rpc.o 00:04:15.785 CC module/accel/ioat/accel_ioat_rpc.o 00:04:15.785 CC module/accel/error/accel_error_rpc.o 00:04:15.785 LIB libspdk_scheduler_dynamic.a 00:04:16.063 CC module/accel/dsa/accel_dsa_rpc.o 00:04:16.063 SYMLINK libspdk_scheduler_gscheduler.so 00:04:16.063 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:16.063 SO libspdk_scheduler_dynamic.so.3.0 00:04:16.063 LIB libspdk_blob_bdev.a 00:04:16.063 SYMLINK libspdk_scheduler_dynamic.so 00:04:16.063 SO libspdk_blob_bdev.so.10.1 00:04:16.063 LIB libspdk_accel_iaa.a 00:04:16.063 LIB libspdk_accel_ioat.a 00:04:16.063 LIB libspdk_accel_error.a 00:04:16.063 SYMLINK libspdk_blob_bdev.so 00:04:16.063 CC module/sock/uring/uring.o 00:04:16.063 SO libspdk_accel_iaa.so.2.0 00:04:16.063 SO libspdk_accel_ioat.so.5.0 00:04:16.063 LIB libspdk_accel_dsa.a 00:04:16.063 SO libspdk_accel_error.so.1.0 00:04:16.063 SO libspdk_accel_dsa.so.4.0 00:04:16.063 SYMLINK libspdk_accel_iaa.so 00:04:16.063 SYMLINK libspdk_accel_ioat.so 00:04:16.063 SYMLINK libspdk_accel_error.so 00:04:16.063 SYMLINK libspdk_accel_dsa.so 00:04:16.345 CC module/bdev/gpt/gpt.o 00:04:16.345 CC module/blobfs/bdev/blobfs_bdev.o 00:04:16.345 CC module/bdev/delay/vbdev_delay.o 00:04:16.345 CC module/bdev/error/vbdev_error.o 00:04:16.345 CC module/bdev/lvol/vbdev_lvol.o 00:04:16.345 CC module/bdev/malloc/bdev_malloc.o 00:04:16.345 CC module/bdev/null/bdev_null.o 00:04:16.345 CC module/bdev/nvme/bdev_nvme.o 00:04:16.345 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:16.345 CC module/bdev/gpt/vbdev_gpt.o 00:04:16.345 LIB libspdk_sock_posix.a 00:04:16.609 SO libspdk_sock_posix.so.5.0 00:04:16.609 CC module/bdev/error/vbdev_error_rpc.o 00:04:16.609 CC 
module/bdev/null/bdev_null_rpc.o 00:04:16.609 SYMLINK libspdk_sock_posix.so 00:04:16.609 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:16.609 LIB libspdk_blobfs_bdev.a 00:04:16.609 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:16.609 SO libspdk_blobfs_bdev.so.5.0 00:04:16.609 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:16.609 SYMLINK libspdk_blobfs_bdev.so 00:04:16.609 CC module/bdev/nvme/nvme_rpc.o 00:04:16.609 LIB libspdk_bdev_error.a 00:04:16.609 LIB libspdk_bdev_gpt.a 00:04:16.609 LIB libspdk_bdev_null.a 00:04:16.868 SO libspdk_bdev_error.so.5.0 00:04:16.868 SO libspdk_bdev_gpt.so.5.0 00:04:16.868 LIB libspdk_sock_uring.a 00:04:16.868 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:16.868 SO libspdk_bdev_null.so.5.0 00:04:16.868 SO libspdk_sock_uring.so.4.0 00:04:16.868 SYMLINK libspdk_bdev_error.so 00:04:16.868 LIB libspdk_bdev_delay.a 00:04:16.868 SYMLINK libspdk_bdev_gpt.so 00:04:16.868 LIB libspdk_bdev_malloc.a 00:04:16.868 SO libspdk_bdev_delay.so.5.0 00:04:16.868 SYMLINK libspdk_sock_uring.so 00:04:16.868 SYMLINK libspdk_bdev_null.so 00:04:16.868 SO libspdk_bdev_malloc.so.5.0 00:04:16.868 SYMLINK libspdk_bdev_delay.so 00:04:16.868 CC module/bdev/passthru/vbdev_passthru.o 00:04:16.868 CC module/bdev/raid/bdev_raid.o 00:04:16.868 SYMLINK libspdk_bdev_malloc.so 00:04:16.868 CC module/bdev/nvme/bdev_mdns_client.o 00:04:16.868 CC module/bdev/split/vbdev_split.o 00:04:16.868 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:17.126 CC module/bdev/uring/bdev_uring.o 00:04:17.126 CC module/bdev/aio/bdev_aio.o 00:04:17.126 LIB libspdk_bdev_lvol.a 00:04:17.126 SO libspdk_bdev_lvol.so.5.0 00:04:17.126 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.126 SYMLINK libspdk_bdev_lvol.so 00:04:17.126 CC module/bdev/uring/bdev_uring_rpc.o 00:04:17.126 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.126 CC module/bdev/split/vbdev_split_rpc.o 00:04:17.126 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:17.385 CC module/bdev/nvme/vbdev_opal.o 00:04:17.385 LIB libspdk_bdev_zone_block.a 00:04:17.385 SO libspdk_bdev_zone_block.so.5.0 00:04:17.385 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.385 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.385 LIB libspdk_bdev_split.a 00:04:17.385 LIB libspdk_bdev_uring.a 00:04:17.385 LIB libspdk_bdev_aio.a 00:04:17.385 SYMLINK libspdk_bdev_zone_block.so 00:04:17.385 SO libspdk_bdev_split.so.5.0 00:04:17.385 SO libspdk_bdev_uring.so.5.0 00:04:17.385 LIB libspdk_bdev_passthru.a 00:04:17.385 SO libspdk_bdev_aio.so.5.0 00:04:17.385 SO libspdk_bdev_passthru.so.5.0 00:04:17.385 SYMLINK libspdk_bdev_split.so 00:04:17.385 SYMLINK libspdk_bdev_uring.so 00:04:17.385 SYMLINK libspdk_bdev_aio.so 00:04:17.385 CC module/bdev/raid/raid0.o 00:04:17.385 CC module/bdev/ftl/bdev_ftl.o 00:04:17.644 SYMLINK libspdk_bdev_passthru.so 00:04:17.644 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:17.644 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.644 CC module/bdev/iscsi/bdev_iscsi.o 00:04:17.644 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:17.644 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:17.644 CC module/bdev/raid/raid1.o 00:04:17.644 CC module/bdev/raid/concat.o 00:04:17.644 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:17.644 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:17.644 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:17.903 LIB libspdk_bdev_ftl.a 00:04:17.903 SO libspdk_bdev_ftl.so.5.0 00:04:17.903 SYMLINK libspdk_bdev_ftl.so 00:04:17.903 LIB libspdk_bdev_iscsi.a 00:04:17.903 LIB libspdk_bdev_raid.a 00:04:17.903 SO libspdk_bdev_iscsi.so.5.0 00:04:17.903 SO libspdk_bdev_raid.so.5.0 
00:04:18.161 SYMLINK libspdk_bdev_iscsi.so 00:04:18.161 LIB libspdk_bdev_virtio.a 00:04:18.161 SYMLINK libspdk_bdev_raid.so 00:04:18.161 SO libspdk_bdev_virtio.so.5.0 00:04:18.161 SYMLINK libspdk_bdev_virtio.so 00:04:18.728 LIB libspdk_bdev_nvme.a 00:04:18.728 SO libspdk_bdev_nvme.so.6.0 00:04:18.728 SYMLINK libspdk_bdev_nvme.so 00:04:18.987 CC module/event/subsystems/iobuf/iobuf.o 00:04:18.987 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:18.987 CC module/event/subsystems/scheduler/scheduler.o 00:04:18.987 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:18.987 CC module/event/subsystems/sock/sock.o 00:04:18.987 CC module/event/subsystems/vmd/vmd.o 00:04:18.987 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:19.245 LIB libspdk_event_sock.a 00:04:19.245 LIB libspdk_event_scheduler.a 00:04:19.245 LIB libspdk_event_vhost_blk.a 00:04:19.245 SO libspdk_event_sock.so.4.0 00:04:19.245 SO libspdk_event_scheduler.so.3.0 00:04:19.245 LIB libspdk_event_iobuf.a 00:04:19.245 LIB libspdk_event_vmd.a 00:04:19.245 SO libspdk_event_vhost_blk.so.2.0 00:04:19.245 SO libspdk_event_iobuf.so.2.0 00:04:19.245 SO libspdk_event_vmd.so.5.0 00:04:19.245 SYMLINK libspdk_event_sock.so 00:04:19.245 SYMLINK libspdk_event_scheduler.so 00:04:19.245 SYMLINK libspdk_event_vhost_blk.so 00:04:19.245 SYMLINK libspdk_event_vmd.so 00:04:19.245 SYMLINK libspdk_event_iobuf.so 00:04:19.504 CC module/event/subsystems/accel/accel.o 00:04:19.504 LIB libspdk_event_accel.a 00:04:19.504 SO libspdk_event_accel.so.5.0 00:04:19.763 SYMLINK libspdk_event_accel.so 00:04:19.763 CC module/event/subsystems/bdev/bdev.o 00:04:20.023 LIB libspdk_event_bdev.a 00:04:20.024 SO libspdk_event_bdev.so.5.0 00:04:20.283 SYMLINK libspdk_event_bdev.so 00:04:20.283 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:20.283 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:20.283 CC module/event/subsystems/scsi/scsi.o 00:04:20.283 CC module/event/subsystems/nbd/nbd.o 00:04:20.283 CC module/event/subsystems/ublk/ublk.o 00:04:20.542 LIB libspdk_event_nbd.a 00:04:20.542 SO libspdk_event_nbd.so.5.0 00:04:20.542 LIB libspdk_event_ublk.a 00:04:20.542 LIB libspdk_event_scsi.a 00:04:20.542 SO libspdk_event_ublk.so.2.0 00:04:20.542 SO libspdk_event_scsi.so.5.0 00:04:20.542 SYMLINK libspdk_event_nbd.so 00:04:20.542 LIB libspdk_event_nvmf.a 00:04:20.542 SYMLINK libspdk_event_ublk.so 00:04:20.542 SYMLINK libspdk_event_scsi.so 00:04:20.542 SO libspdk_event_nvmf.so.5.0 00:04:20.801 SYMLINK libspdk_event_nvmf.so 00:04:20.801 CC module/event/subsystems/iscsi/iscsi.o 00:04:20.801 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:20.801 LIB libspdk_event_vhost_scsi.a 00:04:21.060 LIB libspdk_event_iscsi.a 00:04:21.060 SO libspdk_event_vhost_scsi.so.2.0 00:04:21.060 SO libspdk_event_iscsi.so.5.0 00:04:21.060 SYMLINK libspdk_event_vhost_scsi.so 00:04:21.060 SYMLINK libspdk_event_iscsi.so 00:04:21.060 SO libspdk.so.5.0 00:04:21.060 SYMLINK libspdk.so 00:04:21.320 CXX app/trace/trace.o 00:04:21.320 CC examples/ioat/perf/perf.o 00:04:21.320 CC examples/nvme/hello_world/hello_world.o 00:04:21.320 CC examples/accel/perf/accel_perf.o 00:04:21.320 CC examples/sock/hello_world/hello_sock.o 00:04:21.320 CC examples/vmd/lsvmd/lsvmd.o 00:04:21.320 CC test/accel/dif/dif.o 00:04:21.320 CC examples/bdev/hello_world/hello_bdev.o 00:04:21.320 CC examples/nvmf/nvmf/nvmf.o 00:04:21.320 CC examples/blob/hello_world/hello_blob.o 00:04:21.578 LINK lsvmd 00:04:21.578 LINK ioat_perf 00:04:21.578 LINK hello_world 00:04:21.578 LINK hello_sock 00:04:21.578 LINK hello_bdev 00:04:21.578 LINK 
hello_blob 00:04:21.838 LINK spdk_trace 00:04:21.838 LINK nvmf 00:04:21.838 CC examples/vmd/led/led.o 00:04:21.838 LINK dif 00:04:21.838 CC examples/ioat/verify/verify.o 00:04:21.838 LINK accel_perf 00:04:21.838 CC examples/nvme/reconnect/reconnect.o 00:04:21.838 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.838 LINK led 00:04:22.097 CC app/trace_record/trace_record.o 00:04:22.097 CC examples/blob/cli/blobcli.o 00:04:22.097 CC examples/util/zipf/zipf.o 00:04:22.097 LINK verify 00:04:22.097 CC examples/thread/thread/thread_ex.o 00:04:22.097 CC test/app/bdev_svc/bdev_svc.o 00:04:22.097 CC test/bdev/bdevio/bdevio.o 00:04:22.097 LINK zipf 00:04:22.097 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:22.097 LINK reconnect 00:04:22.356 LINK spdk_trace_record 00:04:22.356 CC examples/nvme/arbitration/arbitration.o 00:04:22.356 LINK bdev_svc 00:04:22.356 LINK thread 00:04:22.356 TEST_HEADER include/spdk/accel.h 00:04:22.356 TEST_HEADER include/spdk/accel_module.h 00:04:22.356 TEST_HEADER include/spdk/assert.h 00:04:22.356 TEST_HEADER include/spdk/barrier.h 00:04:22.356 TEST_HEADER include/spdk/base64.h 00:04:22.356 TEST_HEADER include/spdk/bdev.h 00:04:22.356 TEST_HEADER include/spdk/bdev_module.h 00:04:22.356 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.356 TEST_HEADER include/spdk/bit_array.h 00:04:22.356 TEST_HEADER include/spdk/bit_pool.h 00:04:22.356 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.356 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.356 TEST_HEADER include/spdk/blobfs.h 00:04:22.356 TEST_HEADER include/spdk/blob.h 00:04:22.356 TEST_HEADER include/spdk/conf.h 00:04:22.356 TEST_HEADER include/spdk/config.h 00:04:22.356 TEST_HEADER include/spdk/cpuset.h 00:04:22.356 CC test/blobfs/mkfs/mkfs.o 00:04:22.356 TEST_HEADER include/spdk/crc16.h 00:04:22.356 CC app/nvmf_tgt/nvmf_main.o 00:04:22.356 TEST_HEADER include/spdk/crc32.h 00:04:22.356 TEST_HEADER include/spdk/crc64.h 00:04:22.356 TEST_HEADER include/spdk/dif.h 00:04:22.356 TEST_HEADER include/spdk/dma.h 00:04:22.615 TEST_HEADER include/spdk/endian.h 00:04:22.615 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.615 TEST_HEADER include/spdk/env.h 00:04:22.615 TEST_HEADER include/spdk/event.h 00:04:22.615 TEST_HEADER include/spdk/fd_group.h 00:04:22.615 LINK blobcli 00:04:22.615 TEST_HEADER include/spdk/fd.h 00:04:22.615 TEST_HEADER include/spdk/file.h 00:04:22.615 TEST_HEADER include/spdk/ftl.h 00:04:22.615 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.615 LINK bdevio 00:04:22.615 TEST_HEADER include/spdk/hexlify.h 00:04:22.615 TEST_HEADER include/spdk/histogram_data.h 00:04:22.615 TEST_HEADER include/spdk/idxd.h 00:04:22.615 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.615 TEST_HEADER include/spdk/init.h 00:04:22.615 TEST_HEADER include/spdk/ioat.h 00:04:22.615 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.615 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.615 TEST_HEADER include/spdk/json.h 00:04:22.615 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.615 TEST_HEADER include/spdk/likely.h 00:04:22.615 TEST_HEADER include/spdk/log.h 00:04:22.615 LINK arbitration 00:04:22.615 TEST_HEADER include/spdk/lvol.h 00:04:22.615 TEST_HEADER include/spdk/memory.h 00:04:22.615 TEST_HEADER include/spdk/mmio.h 00:04:22.615 TEST_HEADER include/spdk/nbd.h 00:04:22.615 TEST_HEADER include/spdk/notify.h 00:04:22.615 TEST_HEADER include/spdk/nvme.h 00:04:22.615 TEST_HEADER include/spdk/nvme_intel.h 00:04:22.615 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:22.615 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:22.615 TEST_HEADER include/spdk/nvme_spec.h 
00:04:22.615 TEST_HEADER include/spdk/nvme_zns.h 00:04:22.615 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:22.615 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:22.615 CC test/app/histogram_perf/histogram_perf.o 00:04:22.615 TEST_HEADER include/spdk/nvmf.h 00:04:22.615 TEST_HEADER include/spdk/nvmf_spec.h 00:04:22.615 TEST_HEADER include/spdk/nvmf_transport.h 00:04:22.615 TEST_HEADER include/spdk/opal.h 00:04:22.615 TEST_HEADER include/spdk/opal_spec.h 00:04:22.615 TEST_HEADER include/spdk/pci_ids.h 00:04:22.615 LINK nvmf_tgt 00:04:22.615 TEST_HEADER include/spdk/pipe.h 00:04:22.615 TEST_HEADER include/spdk/queue.h 00:04:22.615 TEST_HEADER include/spdk/reduce.h 00:04:22.615 TEST_HEADER include/spdk/rpc.h 00:04:22.615 TEST_HEADER include/spdk/scheduler.h 00:04:22.615 TEST_HEADER include/spdk/scsi.h 00:04:22.615 TEST_HEADER include/spdk/scsi_spec.h 00:04:22.615 TEST_HEADER include/spdk/sock.h 00:04:22.615 TEST_HEADER include/spdk/stdinc.h 00:04:22.615 TEST_HEADER include/spdk/string.h 00:04:22.615 TEST_HEADER include/spdk/thread.h 00:04:22.615 TEST_HEADER include/spdk/trace.h 00:04:22.615 TEST_HEADER include/spdk/trace_parser.h 00:04:22.615 TEST_HEADER include/spdk/tree.h 00:04:22.615 TEST_HEADER include/spdk/ublk.h 00:04:22.615 LINK nvme_manage 00:04:22.615 TEST_HEADER include/spdk/util.h 00:04:22.615 TEST_HEADER include/spdk/uuid.h 00:04:22.615 LINK mkfs 00:04:22.615 TEST_HEADER include/spdk/version.h 00:04:22.615 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:22.615 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:22.615 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:22.615 TEST_HEADER include/spdk/vhost.h 00:04:22.615 TEST_HEADER include/spdk/vmd.h 00:04:22.615 LINK bdevperf 00:04:22.615 TEST_HEADER include/spdk/xor.h 00:04:22.615 TEST_HEADER include/spdk/zipf.h 00:04:22.615 CXX test/cpp_headers/accel.o 00:04:22.874 CC test/app/jsoncat/jsoncat.o 00:04:22.874 LINK histogram_perf 00:04:22.874 CC examples/nvme/hotplug/hotplug.o 00:04:22.874 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:22.874 CXX test/cpp_headers/accel_module.o 00:04:22.874 CC test/app/stub/stub.o 00:04:22.874 LINK jsoncat 00:04:22.874 CC app/iscsi_tgt/iscsi_tgt.o 00:04:23.133 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:23.133 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:23.133 LINK cmb_copy 00:04:23.133 CC examples/idxd/perf/perf.o 00:04:23.133 LINK hotplug 00:04:23.133 CXX test/cpp_headers/assert.o 00:04:23.133 LINK stub 00:04:23.133 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:23.133 LINK nvme_fuzz 00:04:23.133 LINK iscsi_tgt 00:04:23.133 LINK interrupt_tgt 00:04:23.133 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:23.133 CXX test/cpp_headers/barrier.o 00:04:23.133 CC examples/nvme/abort/abort.o 00:04:23.391 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:23.392 CC app/spdk_lspci/spdk_lspci.o 00:04:23.392 LINK idxd_perf 00:04:23.392 CXX test/cpp_headers/base64.o 00:04:23.392 CC app/spdk_tgt/spdk_tgt.o 00:04:23.392 CC app/spdk_nvme_perf/perf.o 00:04:23.392 CC app/spdk_nvme_identify/identify.o 00:04:23.392 CXX test/cpp_headers/bdev.o 00:04:23.392 LINK pmr_persistence 00:04:23.392 LINK spdk_lspci 00:04:23.650 LINK abort 00:04:23.650 LINK vhost_fuzz 00:04:23.650 LINK spdk_tgt 00:04:23.650 CC app/spdk_nvme_discover/discovery_aer.o 00:04:23.650 CXX test/cpp_headers/bdev_module.o 00:04:23.650 CC app/spdk_top/spdk_top.o 00:04:23.909 CC test/dma/test_dma/test_dma.o 00:04:23.909 LINK spdk_nvme_discover 00:04:23.909 CC app/vhost/vhost.o 00:04:23.909 CXX test/cpp_headers/bdev_zone.o 00:04:23.909 CC app/spdk_dd/spdk_dd.o 
00:04:23.909 CC test/env/mem_callbacks/mem_callbacks.o 00:04:24.167 LINK vhost 00:04:24.167 CC app/fio/nvme/fio_plugin.o 00:04:24.167 CXX test/cpp_headers/bit_array.o 00:04:24.167 LINK test_dma 00:04:24.167 LINK spdk_nvme_identify 00:04:24.167 LINK spdk_nvme_perf 00:04:24.424 LINK spdk_dd 00:04:24.424 CXX test/cpp_headers/bit_pool.o 00:04:24.424 CC test/event/event_perf/event_perf.o 00:04:24.424 CXX test/cpp_headers/blob_bdev.o 00:04:24.424 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.424 CXX test/cpp_headers/blobfs.o 00:04:24.424 CXX test/cpp_headers/blob.o 00:04:24.424 LINK event_perf 00:04:24.424 CC test/env/vtophys/vtophys.o 00:04:24.682 LINK mem_callbacks 00:04:24.682 LINK spdk_top 00:04:24.682 CXX test/cpp_headers/conf.o 00:04:24.682 CC test/event/reactor/reactor.o 00:04:24.682 CC test/event/reactor_perf/reactor_perf.o 00:04:24.682 LINK spdk_nvme 00:04:24.682 LINK iscsi_fuzz 00:04:24.682 LINK vtophys 00:04:24.682 CC test/rpc_client/rpc_client_test.o 00:04:24.682 LINK reactor 00:04:24.682 CC test/nvme/aer/aer.o 00:04:24.682 CXX test/cpp_headers/config.o 00:04:24.682 CC test/nvme/reset/reset.o 00:04:24.940 CXX test/cpp_headers/cpuset.o 00:04:24.940 LINK reactor_perf 00:04:24.940 CC test/lvol/esnap/esnap.o 00:04:24.940 CC app/fio/bdev/fio_plugin.o 00:04:24.940 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:24.940 LINK rpc_client_test 00:04:24.940 CC test/env/memory/memory_ut.o 00:04:24.940 CXX test/cpp_headers/crc16.o 00:04:24.940 CC test/event/app_repeat/app_repeat.o 00:04:24.940 CC test/nvme/sgl/sgl.o 00:04:25.198 LINK reset 00:04:25.198 LINK env_dpdk_post_init 00:04:25.198 LINK aer 00:04:25.198 CXX test/cpp_headers/crc32.o 00:04:25.198 LINK app_repeat 00:04:25.198 CC test/thread/poller_perf/poller_perf.o 00:04:25.198 CXX test/cpp_headers/crc64.o 00:04:25.198 CXX test/cpp_headers/dif.o 00:04:25.198 CXX test/cpp_headers/dma.o 00:04:25.198 CC test/nvme/e2edp/nvme_dp.o 00:04:25.198 LINK sgl 00:04:25.456 LINK spdk_bdev 00:04:25.456 LINK poller_perf 00:04:25.456 CXX test/cpp_headers/endian.o 00:04:25.456 CC test/event/scheduler/scheduler.o 00:04:25.456 CXX test/cpp_headers/env_dpdk.o 00:04:25.456 CC test/nvme/overhead/overhead.o 00:04:25.456 CC test/nvme/err_injection/err_injection.o 00:04:25.456 CC test/nvme/startup/startup.o 00:04:25.714 CC test/env/pci/pci_ut.o 00:04:25.714 CXX test/cpp_headers/env.o 00:04:25.714 CXX test/cpp_headers/event.o 00:04:25.714 LINK nvme_dp 00:04:25.714 LINK scheduler 00:04:25.714 LINK err_injection 00:04:25.714 LINK startup 00:04:25.714 CXX test/cpp_headers/fd_group.o 00:04:25.714 CC test/nvme/reserve/reserve.o 00:04:25.714 CC test/nvme/simple_copy/simple_copy.o 00:04:25.973 LINK overhead 00:04:25.973 LINK memory_ut 00:04:25.973 CXX test/cpp_headers/fd.o 00:04:25.973 CC test/nvme/connect_stress/connect_stress.o 00:04:25.973 LINK pci_ut 00:04:25.973 CC test/nvme/boot_partition/boot_partition.o 00:04:25.973 CC test/nvme/compliance/nvme_compliance.o 00:04:25.973 LINK reserve 00:04:25.973 CC test/nvme/fused_ordering/fused_ordering.o 00:04:25.973 LINK simple_copy 00:04:25.973 CXX test/cpp_headers/file.o 00:04:26.231 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.231 LINK connect_stress 00:04:26.231 LINK boot_partition 00:04:26.231 CXX test/cpp_headers/ftl.o 00:04:26.231 CXX test/cpp_headers/gpt_spec.o 00:04:26.231 CC test/nvme/fdp/fdp.o 00:04:26.231 LINK fused_ordering 00:04:26.231 CC test/nvme/cuse/cuse.o 00:04:26.231 LINK doorbell_aers 00:04:26.231 CXX test/cpp_headers/hexlify.o 00:04:26.231 CXX test/cpp_headers/histogram_data.o 00:04:26.231 LINK 
nvme_compliance 00:04:26.490 CXX test/cpp_headers/idxd.o 00:04:26.490 CXX test/cpp_headers/idxd_spec.o 00:04:26.490 CXX test/cpp_headers/init.o 00:04:26.490 CXX test/cpp_headers/ioat.o 00:04:26.490 CXX test/cpp_headers/ioat_spec.o 00:04:26.490 CXX test/cpp_headers/iscsi_spec.o 00:04:26.490 CXX test/cpp_headers/json.o 00:04:26.490 LINK fdp 00:04:26.490 CXX test/cpp_headers/jsonrpc.o 00:04:26.490 CXX test/cpp_headers/likely.o 00:04:26.490 CXX test/cpp_headers/log.o 00:04:26.749 CXX test/cpp_headers/lvol.o 00:04:26.749 CXX test/cpp_headers/memory.o 00:04:26.749 CXX test/cpp_headers/mmio.o 00:04:26.749 CXX test/cpp_headers/nbd.o 00:04:26.749 CXX test/cpp_headers/notify.o 00:04:26.749 CXX test/cpp_headers/nvme.o 00:04:26.749 CXX test/cpp_headers/nvme_intel.o 00:04:26.749 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.749 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.749 CXX test/cpp_headers/nvme_spec.o 00:04:26.749 CXX test/cpp_headers/nvme_zns.o 00:04:26.749 CXX test/cpp_headers/nvmf_cmd.o 00:04:26.749 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.008 CXX test/cpp_headers/nvmf.o 00:04:27.008 CXX test/cpp_headers/nvmf_spec.o 00:04:27.008 CXX test/cpp_headers/nvmf_transport.o 00:04:27.008 CXX test/cpp_headers/opal.o 00:04:27.008 CXX test/cpp_headers/opal_spec.o 00:04:27.008 CXX test/cpp_headers/pci_ids.o 00:04:27.008 CXX test/cpp_headers/pipe.o 00:04:27.008 CXX test/cpp_headers/queue.o 00:04:27.008 CXX test/cpp_headers/reduce.o 00:04:27.008 CXX test/cpp_headers/rpc.o 00:04:27.008 CXX test/cpp_headers/scheduler.o 00:04:27.266 CXX test/cpp_headers/scsi.o 00:04:27.266 CXX test/cpp_headers/scsi_spec.o 00:04:27.266 CXX test/cpp_headers/sock.o 00:04:27.266 CXX test/cpp_headers/stdinc.o 00:04:27.266 CXX test/cpp_headers/string.o 00:04:27.266 CXX test/cpp_headers/thread.o 00:04:27.266 CXX test/cpp_headers/trace.o 00:04:27.266 CXX test/cpp_headers/trace_parser.o 00:04:27.266 LINK cuse 00:04:27.266 CXX test/cpp_headers/tree.o 00:04:27.266 CXX test/cpp_headers/ublk.o 00:04:27.266 CXX test/cpp_headers/util.o 00:04:27.266 CXX test/cpp_headers/uuid.o 00:04:27.266 CXX test/cpp_headers/version.o 00:04:27.266 CXX test/cpp_headers/vfio_user_pci.o 00:04:27.266 CXX test/cpp_headers/vfio_user_spec.o 00:04:27.525 CXX test/cpp_headers/vhost.o 00:04:27.525 CXX test/cpp_headers/vmd.o 00:04:27.525 CXX test/cpp_headers/xor.o 00:04:27.525 CXX test/cpp_headers/zipf.o 00:04:29.425 LINK esnap 00:04:29.425 00:04:29.425 real 0m50.656s 00:04:29.425 user 4m53.576s 00:04:29.425 sys 0m56.548s 00:04:29.425 11:00:40 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:29.425 ************************************ 00:04:29.425 END TEST make 00:04:29.425 ************************************ 00:04:29.425 11:00:40 -- common/autotest_common.sh@10 -- $ set +x 00:04:29.425 11:00:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:29.425 11:00:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:29.425 11:00:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:29.684 11:00:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:29.684 11:00:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:29.684 11:00:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:29.684 11:00:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:29.684 11:00:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:29.684 11:00:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:29.684 11:00:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.684 11:00:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:29.684 11:00:40 -- 
scripts/common.sh@337 -- # local 'op=<' 00:04:29.684 11:00:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:29.684 11:00:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:29.684 11:00:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:29.684 11:00:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:29.684 11:00:40 -- scripts/common.sh@344 -- # : 1 00:04:29.684 11:00:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:29.684 11:00:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.684 11:00:40 -- scripts/common.sh@364 -- # decimal 1 00:04:29.684 11:00:40 -- scripts/common.sh@352 -- # local d=1 00:04:29.684 11:00:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.684 11:00:40 -- scripts/common.sh@354 -- # echo 1 00:04:29.684 11:00:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:29.684 11:00:40 -- scripts/common.sh@365 -- # decimal 2 00:04:29.684 11:00:40 -- scripts/common.sh@352 -- # local d=2 00:04:29.684 11:00:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.684 11:00:40 -- scripts/common.sh@354 -- # echo 2 00:04:29.684 11:00:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:29.684 11:00:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:29.684 11:00:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:29.684 11:00:40 -- scripts/common.sh@367 -- # return 0 00:04:29.684 11:00:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.684 11:00:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.684 --rc genhtml_branch_coverage=1 00:04:29.684 --rc genhtml_function_coverage=1 00:04:29.684 --rc genhtml_legend=1 00:04:29.684 --rc geninfo_all_blocks=1 00:04:29.684 --rc geninfo_unexecuted_blocks=1 00:04:29.684 00:04:29.684 ' 00:04:29.684 11:00:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.684 --rc genhtml_branch_coverage=1 00:04:29.684 --rc genhtml_function_coverage=1 00:04:29.684 --rc genhtml_legend=1 00:04:29.684 --rc geninfo_all_blocks=1 00:04:29.684 --rc geninfo_unexecuted_blocks=1 00:04:29.684 00:04:29.684 ' 00:04:29.684 11:00:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.684 --rc genhtml_branch_coverage=1 00:04:29.684 --rc genhtml_function_coverage=1 00:04:29.684 --rc genhtml_legend=1 00:04:29.684 --rc geninfo_all_blocks=1 00:04:29.684 --rc geninfo_unexecuted_blocks=1 00:04:29.684 00:04:29.684 ' 00:04:29.684 11:00:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.684 --rc genhtml_branch_coverage=1 00:04:29.684 --rc genhtml_function_coverage=1 00:04:29.684 --rc genhtml_legend=1 00:04:29.684 --rc geninfo_all_blocks=1 00:04:29.684 --rc geninfo_unexecuted_blocks=1 00:04:29.684 00:04:29.684 ' 00:04:29.684 11:00:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.684 11:00:40 -- nvmf/common.sh@7 -- # uname -s 00:04:29.684 11:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.684 11:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.684 11:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.684 11:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.684 11:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:29.684 11:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.684 11:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.684 11:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.684 11:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.684 11:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.684 11:00:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:04:29.684 11:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:04:29.684 11:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.684 11:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.684 11:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:29.684 11:00:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.684 11:00:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.684 11:00:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.684 11:00:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.684 11:00:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.684 11:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.684 11:00:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.684 11:00:40 -- paths/export.sh@5 -- # export PATH 00:04:29.684 11:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.684 11:00:40 -- nvmf/common.sh@46 -- # : 0 00:04:29.684 11:00:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:29.684 11:00:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:29.684 11:00:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:29.684 11:00:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.684 11:00:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.684 11:00:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:29.684 11:00:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:29.684 11:00:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:29.684 11:00:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:29.684 11:00:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:29.684 11:00:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:29.684 11:00:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:29.684 11:00:40 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.684 11:00:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:29.684 11:00:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.684 11:00:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:29.684 11:00:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:29.684 11:00:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:29.684 11:00:40 -- spdk/autotest.sh@48 -- # udevadm_pid=60093 00:04:29.684 11:00:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:29.684 11:00:40 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:29.684 11:00:40 -- spdk/autotest.sh@54 -- # echo 60096 00:04:29.684 11:00:40 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:29.684 11:00:40 -- spdk/autotest.sh@56 -- # echo 60097 00:04:29.684 11:00:40 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:29.684 11:00:40 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:29.684 11:00:40 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:29.684 11:00:40 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:29.684 11:00:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.684 11:00:40 -- common/autotest_common.sh@10 -- # set +x 00:04:29.684 11:00:40 -- spdk/autotest.sh@70 -- # create_test_list 00:04:29.684 11:00:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:29.684 11:00:40 -- common/autotest_common.sh@10 -- # set +x 00:04:29.684 11:00:40 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:29.684 11:00:40 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:29.684 11:00:40 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:29.684 11:00:40 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:29.684 11:00:40 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:29.684 11:00:40 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:29.684 11:00:40 -- common/autotest_common.sh@1450 -- # uname 00:04:29.684 11:00:40 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:29.684 11:00:40 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:29.684 11:00:40 -- common/autotest_common.sh@1470 -- # uname 00:04:29.684 11:00:40 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:29.684 11:00:40 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:29.685 11:00:40 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:29.943 lcov: LCOV version 1.15 00:04:29.943 11:00:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:38.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:38.057 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:38.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:38.057 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:38.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:38.057 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:56.150 11:01:07 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:56.150 11:01:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.150 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.150 11:01:07 -- spdk/autotest.sh@89 -- # rm -f 00:04:56.150 11:01:07 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.783 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:56.783 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:56.783 11:01:07 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:56.783 11:01:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:56.783 11:01:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:56.783 11:01:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:56.783 11:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:56.783 11:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:56.783 11:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:56.783 11:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:56.783 11:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:56.783 11:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:56.783 11:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:56.783 11:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:56.783 11:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:56.783 11:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:56.783 11:01:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:56.783 11:01:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:56.783 11:01:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:56.783 11:01:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:56.783 11:01:07 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:56.783 11:01:07 -- spdk/autotest.sh@108 -- # grep -v p 00:04:56.783 11:01:07 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:56.783 11:01:07 -- spdk/autotest.sh@108 -- # for dev in $(ls 
/dev/nvme*n* | grep -v p || true) 00:04:56.783 11:01:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:56.783 11:01:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:56.783 11:01:07 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:56.783 11:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:56.783 No valid GPT data, bailing 00:04:56.783 11:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.783 11:01:07 -- scripts/common.sh@393 -- # pt= 00:04:56.783 11:01:07 -- scripts/common.sh@394 -- # return 1 00:04:56.783 11:01:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:56.783 1+0 records in 00:04:56.783 1+0 records out 00:04:56.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465692 s, 225 MB/s 00:04:56.783 11:01:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:56.783 11:01:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:56.783 11:01:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:56.783 11:01:07 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:56.783 11:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:57.053 No valid GPT data, bailing 00:04:57.053 11:01:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:57.053 11:01:07 -- scripts/common.sh@393 -- # pt= 00:04:57.053 11:01:07 -- scripts/common.sh@394 -- # return 1 00:04:57.053 11:01:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:57.053 1+0 records in 00:04:57.053 1+0 records out 00:04:57.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387679 s, 270 MB/s 00:04:57.053 11:01:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.053 11:01:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.053 11:01:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:57.053 11:01:07 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:57.053 11:01:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:57.053 No valid GPT data, bailing 00:04:57.053 11:01:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:57.053 11:01:08 -- scripts/common.sh@393 -- # pt= 00:04:57.053 11:01:08 -- scripts/common.sh@394 -- # return 1 00:04:57.053 11:01:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:57.053 1+0 records in 00:04:57.053 1+0 records out 00:04:57.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045946 s, 228 MB/s 00:04:57.053 11:01:08 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.053 11:01:08 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.053 11:01:08 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:57.053 11:01:08 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:57.053 11:01:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:57.053 No valid GPT data, bailing 00:04:57.053 11:01:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:57.053 11:01:08 -- scripts/common.sh@393 -- # pt= 00:04:57.053 11:01:08 -- scripts/common.sh@394 -- # return 1 00:04:57.053 11:01:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:57.053 1+0 records in 00:04:57.053 1+0 records out 00:04:57.053 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00477767 s, 219 MB/s 00:04:57.053 11:01:08 -- spdk/autotest.sh@116 -- # sync 00:04:57.621 11:01:08 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.621 11:01:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.621 11:01:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:59.526 11:01:10 -- spdk/autotest.sh@122 -- # uname -s 00:04:59.526 11:01:10 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:59.526 11:01:10 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:59.526 11:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.526 11:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.526 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:04:59.526 ************************************ 00:04:59.526 START TEST setup.sh 00:04:59.526 ************************************ 00:04:59.526 11:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:59.526 * Looking for test storage... 00:04:59.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.526 11:01:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.785 11:01:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.785 11:01:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.785 11:01:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.785 11:01:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.785 11:01:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.785 11:01:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.785 11:01:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.785 11:01:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.785 11:01:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.785 11:01:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.785 11:01:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.785 11:01:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.785 11:01:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.785 11:01:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.785 11:01:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.785 11:01:10 -- scripts/common.sh@344 -- # : 1 00:04:59.785 11:01:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.785 11:01:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.785 11:01:10 -- scripts/common.sh@364 -- # decimal 1 00:04:59.785 11:01:10 -- scripts/common.sh@352 -- # local d=1 00:04:59.785 11:01:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.785 11:01:10 -- scripts/common.sh@354 -- # echo 1 00:04:59.785 11:01:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.785 11:01:10 -- scripts/common.sh@365 -- # decimal 2 00:04:59.785 11:01:10 -- scripts/common.sh@352 -- # local d=2 00:04:59.785 11:01:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.785 11:01:10 -- scripts/common.sh@354 -- # echo 2 00:04:59.785 11:01:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.785 11:01:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.785 11:01:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.785 11:01:10 -- scripts/common.sh@367 -- # return 0 00:04:59.785 11:01:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.785 11:01:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.785 --rc genhtml_branch_coverage=1 00:04:59.785 --rc genhtml_function_coverage=1 00:04:59.785 --rc genhtml_legend=1 00:04:59.785 --rc geninfo_all_blocks=1 00:04:59.785 --rc geninfo_unexecuted_blocks=1 00:04:59.785 00:04:59.785 ' 00:04:59.785 11:01:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.785 --rc genhtml_branch_coverage=1 00:04:59.785 --rc genhtml_function_coverage=1 00:04:59.785 --rc genhtml_legend=1 00:04:59.785 --rc geninfo_all_blocks=1 00:04:59.785 --rc geninfo_unexecuted_blocks=1 00:04:59.785 00:04:59.785 ' 00:04:59.785 11:01:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.785 --rc genhtml_branch_coverage=1 00:04:59.786 --rc genhtml_function_coverage=1 00:04:59.786 --rc genhtml_legend=1 00:04:59.786 --rc geninfo_all_blocks=1 00:04:59.786 --rc geninfo_unexecuted_blocks=1 00:04:59.786 00:04:59.786 ' 00:04:59.786 11:01:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.786 --rc genhtml_branch_coverage=1 00:04:59.786 --rc genhtml_function_coverage=1 00:04:59.786 --rc genhtml_legend=1 00:04:59.786 --rc geninfo_all_blocks=1 00:04:59.786 --rc geninfo_unexecuted_blocks=1 00:04:59.786 00:04:59.786 ' 00:04:59.786 11:01:10 -- setup/test-setup.sh@10 -- # uname -s 00:04:59.786 11:01:10 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:59.786 11:01:10 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:59.786 11:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.786 11:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.786 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:04:59.786 ************************************ 00:04:59.786 START TEST acl 00:04:59.786 ************************************ 00:04:59.786 11:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:59.786 * Looking for test storage... 
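Just before these setup tests, the pre_cleanup pass above walked every whole NVMe namespace (ls /dev/nvme*n* with partition nodes filtered out), asked scripts/spdk-gpt.py and blkid -s PTTYPE whether anything still claims it, and zeroed the first mebibyte with dd when nothing did. A condensed, non-destructive sketch of that loop follows; it is illustrative only, omits the spdk-gpt.py probe, and prints the dd command instead of running it (autotest really does overwrite the device).

# Sketch of the per-namespace cleanup traced above.
for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
    # A non-empty PTTYPE means a partition table still claims the device.
    if [[ -n "$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)" ]]; then
        continue
    fi
    # Unclaimed namespace: autotest wipes the first 1 MiB of stale metadata.
    echo "would run: dd if=/dev/zero of=$dev bs=1M count=1"
done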
00:04:59.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.786 11:01:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.786 11:01:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.786 11:01:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.058 11:01:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.058 11:01:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.058 11:01:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.058 11:01:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.058 11:01:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.058 11:01:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.058 11:01:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.058 11:01:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.058 11:01:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.058 11:01:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.058 11:01:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.058 11:01:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.058 11:01:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.058 11:01:10 -- scripts/common.sh@344 -- # : 1 00:05:00.058 11:01:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.058 11:01:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.058 11:01:10 -- scripts/common.sh@364 -- # decimal 1 00:05:00.058 11:01:10 -- scripts/common.sh@352 -- # local d=1 00:05:00.058 11:01:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.058 11:01:10 -- scripts/common.sh@354 -- # echo 1 00:05:00.058 11:01:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.058 11:01:10 -- scripts/common.sh@365 -- # decimal 2 00:05:00.058 11:01:10 -- scripts/common.sh@352 -- # local d=2 00:05:00.058 11:01:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.058 11:01:10 -- scripts/common.sh@354 -- # echo 2 00:05:00.058 11:01:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.058 11:01:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.058 11:01:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.058 11:01:10 -- scripts/common.sh@367 -- # return 0 00:05:00.058 11:01:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.058 11:01:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.058 --rc genhtml_branch_coverage=1 00:05:00.058 --rc genhtml_function_coverage=1 00:05:00.058 --rc genhtml_legend=1 00:05:00.058 --rc geninfo_all_blocks=1 00:05:00.058 --rc geninfo_unexecuted_blocks=1 00:05:00.058 00:05:00.058 ' 00:05:00.058 11:01:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.058 --rc genhtml_branch_coverage=1 00:05:00.058 --rc genhtml_function_coverage=1 00:05:00.058 --rc genhtml_legend=1 00:05:00.058 --rc geninfo_all_blocks=1 00:05:00.058 --rc geninfo_unexecuted_blocks=1 00:05:00.058 00:05:00.058 ' 00:05:00.058 11:01:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.058 --rc genhtml_branch_coverage=1 00:05:00.058 --rc genhtml_function_coverage=1 00:05:00.058 --rc genhtml_legend=1 00:05:00.058 --rc geninfo_all_blocks=1 00:05:00.058 --rc geninfo_unexecuted_blocks=1 00:05:00.058 00:05:00.058 ' 00:05:00.058 11:01:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.058 --rc genhtml_branch_coverage=1 00:05:00.058 --rc genhtml_function_coverage=1 00:05:00.058 --rc genhtml_legend=1 00:05:00.058 --rc geninfo_all_blocks=1 00:05:00.058 --rc geninfo_unexecuted_blocks=1 00:05:00.058 00:05:00.058 ' 00:05:00.058 11:01:10 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:00.058 11:01:10 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:00.058 11:01:10 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:00.058 11:01:10 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:00.058 11:01:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.058 11:01:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:00.058 11:01:10 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:00.058 11:01:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.058 11:01:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:00.058 11:01:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:00.058 11:01:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.058 11:01:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:00.058 11:01:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:00.058 11:01:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.058 11:01:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:00.058 11:01:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:00.058 11:01:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.058 11:01:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.058 11:01:10 -- setup/acl.sh@12 -- # devs=() 00:05:00.058 11:01:10 -- setup/acl.sh@12 -- # declare -a devs 00:05:00.058 11:01:10 -- setup/acl.sh@13 -- # drivers=() 00:05:00.058 11:01:10 -- setup/acl.sh@13 -- # declare -A drivers 00:05:00.058 11:01:10 -- setup/acl.sh@51 -- # setup reset 00:05:00.058 11:01:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.058 11:01:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.628 11:01:11 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:00.628 11:01:11 -- setup/acl.sh@16 -- # local dev driver 00:05:00.628 11:01:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.629 11:01:11 -- setup/acl.sh@15 -- # setup output status 00:05:00.629 11:01:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.629 11:01:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:00.887 Hugepages 00:05:00.887 node hugesize free / total 00:05:00.887 11:01:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:00.887 11:01:11 -- setup/acl.sh@19 -- # continue 00:05:00.887 11:01:11 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:00.887 00:05:00.887 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.887 11:01:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:00.887 11:01:11 -- setup/acl.sh@19 -- # continue 00:05:00.888 11:01:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.888 11:01:11 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:00.888 11:01:11 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:00.888 11:01:11 -- setup/acl.sh@20 -- # continue 00:05:00.888 11:01:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.146 11:01:12 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:01.146 11:01:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:01.146 11:01:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:01.146 11:01:12 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:01.146 11:01:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:01.146 11:01:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.146 11:01:12 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:01.146 11:01:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:01.146 11:01:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:01.146 11:01:12 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:01.146 11:01:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:01.146 11:01:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.146 11:01:12 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:01.146 11:01:12 -- setup/acl.sh@54 -- # run_test denied denied 00:05:01.146 11:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.147 11:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.147 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:01.147 ************************************ 00:05:01.147 START TEST denied 00:05:01.147 ************************************ 00:05:01.147 11:01:12 -- common/autotest_common.sh@1114 -- # denied 00:05:01.147 11:01:12 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:01.147 11:01:12 -- setup/acl.sh@38 -- # setup output config 00:05:01.147 11:01:12 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:01.147 11:01:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.147 11:01:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.085 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:02.085 11:01:13 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:02.085 11:01:13 -- setup/acl.sh@28 -- # local dev driver 00:05:02.085 11:01:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:02.085 11:01:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:02.085 11:01:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:02.085 11:01:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:02.085 11:01:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:02.085 11:01:13 -- setup/acl.sh@41 -- # setup reset 00:05:02.085 11:01:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.085 11:01:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.653 00:05:02.653 real 0m1.500s 00:05:02.653 user 0m0.593s 00:05:02.653 sys 0m0.856s 00:05:02.653 11:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.653 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.653 ************************************ 00:05:02.653 END TEST denied 00:05:02.653 
************************************ 00:05:02.653 11:01:13 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:02.653 11:01:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.653 11:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.653 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.653 ************************************ 00:05:02.653 START TEST allowed 00:05:02.653 ************************************ 00:05:02.653 11:01:13 -- common/autotest_common.sh@1114 -- # allowed 00:05:02.653 11:01:13 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:02.653 11:01:13 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:02.653 11:01:13 -- setup/acl.sh@45 -- # setup output config 00:05:02.653 11:01:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.653 11:01:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.589 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.590 11:01:14 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:03.590 11:01:14 -- setup/acl.sh@28 -- # local dev driver 00:05:03.590 11:01:14 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:03.590 11:01:14 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:03.590 11:01:14 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:03.590 11:01:14 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:03.590 11:01:14 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:03.590 11:01:14 -- setup/acl.sh@48 -- # setup reset 00:05:03.590 11:01:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.590 11:01:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.172 00:05:04.172 real 0m1.567s 00:05:04.172 user 0m0.700s 00:05:04.172 sys 0m0.862s 00:05:04.172 11:01:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.172 ************************************ 00:05:04.172 END TEST allowed 00:05:04.172 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.172 ************************************ 00:05:04.172 00:05:04.172 real 0m4.491s 00:05:04.172 user 0m1.992s 00:05:04.172 sys 0m2.472s 00:05:04.172 11:01:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.172 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.172 ************************************ 00:05:04.172 END TEST acl 00:05:04.172 ************************************ 00:05:04.432 11:01:15 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:04.432 11:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.432 11:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.432 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.432 ************************************ 00:05:04.432 START TEST hugepages 00:05:04.432 ************************************ 00:05:04.432 11:01:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:04.432 * Looking for test storage... 
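The denied and allowed tests that just finished both verify a controller the same way the trace shows: resolve the /sys/bus/pci/devices/<bdf>/driver symlink and compare its final component against the expected driver (nvme while the controller is blocked, uio_pci_generic after setup.sh config rebinds it). A small standalone sketch of that check, using the 0000:00:06.0 controller from this run as the example (illustrative only, not the setup/acl.sh source):

# Sketch: report which kernel driver currently claims a PCI function.
bdf=0000:00:06.0
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    echo "$bdf is bound to ${driver##*/}"   # e.g. nvme or uio_pci_generic
else
    echo "$bdf is not bound to any driver"
fi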
00:05:04.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.432 11:01:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.432 11:01:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.432 11:01:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.432 11:01:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.432 11:01:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.432 11:01:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.432 11:01:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.432 11:01:15 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.432 11:01:15 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.432 11:01:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.432 11:01:15 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.432 11:01:15 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.432 11:01:15 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.432 11:01:15 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.432 11:01:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.432 11:01:15 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.432 11:01:15 -- scripts/common.sh@344 -- # : 1 00:05:04.432 11:01:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.432 11:01:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.432 11:01:15 -- scripts/common.sh@364 -- # decimal 1 00:05:04.432 11:01:15 -- scripts/common.sh@352 -- # local d=1 00:05:04.432 11:01:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.432 11:01:15 -- scripts/common.sh@354 -- # echo 1 00:05:04.432 11:01:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.432 11:01:15 -- scripts/common.sh@365 -- # decimal 2 00:05:04.432 11:01:15 -- scripts/common.sh@352 -- # local d=2 00:05:04.432 11:01:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.432 11:01:15 -- scripts/common.sh@354 -- # echo 2 00:05:04.432 11:01:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.432 11:01:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.432 11:01:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.432 11:01:15 -- scripts/common.sh@367 -- # return 0 00:05:04.432 11:01:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.432 11:01:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.432 --rc genhtml_branch_coverage=1 00:05:04.432 --rc genhtml_function_coverage=1 00:05:04.432 --rc genhtml_legend=1 00:05:04.432 --rc geninfo_all_blocks=1 00:05:04.432 --rc geninfo_unexecuted_blocks=1 00:05:04.432 00:05:04.432 ' 00:05:04.432 11:01:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.432 --rc genhtml_branch_coverage=1 00:05:04.432 --rc genhtml_function_coverage=1 00:05:04.432 --rc genhtml_legend=1 00:05:04.432 --rc geninfo_all_blocks=1 00:05:04.432 --rc geninfo_unexecuted_blocks=1 00:05:04.432 00:05:04.432 ' 00:05:04.432 11:01:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.432 --rc genhtml_branch_coverage=1 00:05:04.432 --rc genhtml_function_coverage=1 00:05:04.432 --rc genhtml_legend=1 00:05:04.432 --rc geninfo_all_blocks=1 00:05:04.432 --rc geninfo_unexecuted_blocks=1 00:05:04.432 00:05:04.432 ' 00:05:04.432 11:01:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.432 --rc genhtml_branch_coverage=1 00:05:04.432 --rc genhtml_function_coverage=1 00:05:04.432 --rc genhtml_legend=1 00:05:04.432 --rc geninfo_all_blocks=1 00:05:04.432 --rc geninfo_unexecuted_blocks=1 00:05:04.432 00:05:04.432 ' 00:05:04.432 11:01:15 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:04.432 11:01:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:04.432 11:01:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:04.432 11:01:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:04.432 11:01:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:04.432 11:01:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:04.432 11:01:15 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:04.432 11:01:15 -- setup/common.sh@18 -- # local node= 00:05:04.432 11:01:15 -- setup/common.sh@19 -- # local var val 00:05:04.432 11:01:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.432 11:01:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.432 11:01:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.432 11:01:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.432 11:01:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.432 11:01:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4556900 kB' 'MemAvailable: 7347528 kB' 'Buffers: 2684 kB' 'Cached: 2994028 kB' 'SwapCached: 0 kB' 'Active: 455248 kB' 'Inactive: 2658372 kB' 'Active(anon): 127420 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 51088 kB' 'Shmem: 10512 kB' 'KReclaimable: 82752 kB' 'Slab: 183016 kB' 'SReclaimable: 82752 kB' 'SUnreclaim: 100264 kB' 'KernelStack: 6656 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 319680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- 
setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.432 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.432 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.433 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.433 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.691 11:01:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.691 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.691 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.692 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.692 11:01:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.692 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.692 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.692 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.692 11:01:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.692 11:01:15 -- setup/common.sh@32 -- # continue 00:05:04.692 11:01:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.692 11:01:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.692 11:01:15 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:04.692 11:01:15 -- setup/common.sh@33 -- # echo 2048 00:05:04.692 11:01:15 -- setup/common.sh@33 -- # return 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:04.692 11:01:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:04.692 11:01:15 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:04.692 11:01:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:04.692 11:01:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:04.692 11:01:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:04.692 11:01:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:04.692 11:01:15 -- setup/hugepages.sh@207 -- # get_nodes 00:05:04.692 11:01:15 -- setup/hugepages.sh@27 -- # local node 00:05:04.692 11:01:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.692 11:01:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:04.692 11:01:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.692 11:01:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.692 11:01:15 -- setup/hugepages.sh@208 -- # clear_hp 00:05:04.692 11:01:15 -- setup/hugepages.sh@37 -- # local node hp 00:05:04.692 11:01:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:04.692 11:01:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:04.692 11:01:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:04.692 11:01:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:04.692 11:01:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:04.692 11:01:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:04.692 11:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.692 11:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.692 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.692 ************************************ 00:05:04.692 START TEST default_setup 00:05:04.692 ************************************ 00:05:04.692 11:01:15 -- common/autotest_common.sh@1114 -- # default_setup 00:05:04.692 11:01:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:04.692 11:01:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:04.692 11:01:15 -- setup/hugepages.sh@51 -- # shift 00:05:04.692 11:01:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:04.692 11:01:15 -- setup/hugepages.sh@52 -- # local node_ids 00:05:04.692 11:01:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.692 11:01:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:04.692 11:01:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:04.692 11:01:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.692 11:01:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:04.692 11:01:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.692 11:01:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.692 11:01:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.692 11:01:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:04.692 11:01:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:04.692 11:01:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:04.692 11:01:15 -- setup/hugepages.sh@73 -- # return 0 00:05:04.692 11:01:15 -- setup/hugepages.sh@137 -- # setup output 00:05:04.692 11:01:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.692 11:01:15 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.259 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.521 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.521 11:01:16 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:05.521 11:01:16 -- setup/hugepages.sh@89 -- # local node 00:05:05.521 11:01:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.521 11:01:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.521 11:01:16 -- setup/hugepages.sh@92 -- # local surp 00:05:05.521 11:01:16 -- setup/hugepages.sh@93 -- # local resv 00:05:05.521 11:01:16 -- setup/hugepages.sh@94 -- # local anon 00:05:05.521 11:01:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.521 11:01:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.521 11:01:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.521 11:01:16 -- setup/common.sh@18 -- # local node= 00:05:05.521 11:01:16 -- setup/common.sh@19 -- # local var val 00:05:05.521 11:01:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.521 11:01:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.521 11:01:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.521 11:01:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.521 11:01:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.521 11:01:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.521 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.521 11:01:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656124 kB' 'MemAvailable: 9446660 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456840 kB' 'Inactive: 2658376 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120128 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182860 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100304 kB' 'KernelStack: 6592 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:05.521 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.521 11:01:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.521 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.521 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.521 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- 
setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.522 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.522 11:01:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.523 11:01:16 -- setup/common.sh@33 -- # echo 0 00:05:05.523 11:01:16 -- setup/common.sh@33 -- # return 0 00:05:05.523 11:01:16 -- setup/hugepages.sh@97 -- # anon=0 00:05:05.523 11:01:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.523 11:01:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.523 11:01:16 -- setup/common.sh@18 -- # local node= 00:05:05.523 11:01:16 -- setup/common.sh@19 -- # local var val 00:05:05.523 11:01:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.523 11:01:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.523 11:01:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.523 11:01:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.523 11:01:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.523 11:01:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656124 kB' 'MemAvailable: 9446660 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 2658376 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182856 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100300 kB' 'KernelStack: 6624 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- 
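Every meminfo lookup in this trace follows the same pattern: pick /proc/meminfo (or the per-node file under /sys/devices/system/node/ when a node is given), walk it with IFS=': ' and read -r var val _, and echo the value of the first field whose name matches the requested key. A condensed bash sketch of that idiom, assuming only the system-wide file is consulted (the helper name here is made up; the real logic in the setup/common.sh traced above also strips the "Node <n>" prefix from the per-node files):

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # compare each field name against the requested key and print
            # the first matching value (the token right after the colon)
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize      # 2048 on this runner, per the trace
    get_meminfo_sketch HugePages_Total   # 1024 once default_setup has run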
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 
00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.523 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.523 11:01:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- 
setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.524 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.524 11:01:16 -- setup/common.sh@33 -- # echo 0 00:05:05.524 11:01:16 -- setup/common.sh@33 -- # return 0 00:05:05.524 11:01:16 -- setup/hugepages.sh@99 -- # surp=0 00:05:05.524 11:01:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.524 11:01:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.524 11:01:16 -- setup/common.sh@18 -- # local node= 00:05:05.524 11:01:16 -- setup/common.sh@19 -- # local var val 00:05:05.524 11:01:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.524 11:01:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.524 11:01:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.524 11:01:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.524 11:01:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.524 11:01:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.524 
11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.524 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656124 kB' 'MemAvailable: 9446660 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456608 kB' 'Inactive: 2658376 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182856 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100300 kB' 'KernelStack: 6624 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 
11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.525 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.525 11:01:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 
11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.526 11:01:16 -- setup/common.sh@33 -- # echo 0 00:05:05.526 11:01:16 -- setup/common.sh@33 -- # return 0 00:05:05.526 11:01:16 -- setup/hugepages.sh@100 -- # resv=0 00:05:05.526 nr_hugepages=1024 00:05:05.526 11:01:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.526 resv_hugepages=0 00:05:05.526 surplus_hugepages=0 00:05:05.526 anon_hugepages=0 00:05:05.526 11:01:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.526 11:01:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.526 11:01:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.526 11:01:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.526 11:01:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.526 11:01:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.526 11:01:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.526 11:01:16 -- setup/common.sh@18 -- # local node= 00:05:05.526 11:01:16 -- setup/common.sh@19 -- # local var val 00:05:05.526 11:01:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.526 11:01:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.526 11:01:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.526 11:01:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.526 11:01:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.526 11:01:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656124 kB' 'MemAvailable: 9446660 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 2658376 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182844 kB' 
'SReclaimable: 82556 kB' 'SUnreclaim: 100288 kB' 'KernelStack: 6592 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.526 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.526 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 
11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- 
setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.527 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.527 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- 
setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.528 11:01:16 -- setup/common.sh@33 -- # echo 1024 00:05:05.528 11:01:16 -- setup/common.sh@33 -- # return 0 00:05:05.528 11:01:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.528 11:01:16 -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.528 11:01:16 -- setup/hugepages.sh@27 -- # local node 00:05:05.528 11:01:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.528 11:01:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.528 11:01:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.528 11:01:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.528 11:01:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.528 11:01:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.528 11:01:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.528 11:01:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.528 11:01:16 -- setup/common.sh@18 -- # local node=0 00:05:05.528 11:01:16 -- setup/common.sh@19 -- # local var val 00:05:05.528 11:01:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.528 11:01:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.528 11:01:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.528 11:01:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.528 11:01:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.528 11:01:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6655900 kB' 'MemUsed: 5583216 kB' 'SwapCached: 0 kB' 'Active: 456696 kB' 'Inactive: 2658376 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2996704 kB' 'Mapped: 50812 kB' 'AnonPages: 119976 kB' 'Shmem: 10488 kB' 'KernelStack: 6608 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182876 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 
00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.528 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.528 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.529 11:01:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.529 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.529 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.788 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.788 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # continue 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.789 11:01:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.789 11:01:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.789 11:01:16 -- setup/common.sh@33 -- # echo 0 00:05:05.789 11:01:16 -- setup/common.sh@33 -- # return 0 00:05:05.789 node0=1024 expecting 1024 00:05:05.789 11:01:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.789 11:01:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.789 11:01:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.789 11:01:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.789 11:01:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.789 11:01:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.789 00:05:05.789 real 0m1.067s 00:05:05.789 user 0m0.508s 00:05:05.789 sys 0m0.466s 00:05:05.789 ************************************ 00:05:05.789 END TEST default_setup 00:05:05.789 ************************************ 00:05:05.789 11:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.789 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.789 11:01:16 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:05.789 11:01:16 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.789 11:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.789 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.789 ************************************ 00:05:05.789 START TEST per_node_1G_alloc 00:05:05.789 ************************************ 00:05:05.789 11:01:16 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:05.789 11:01:16 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:05.789 11:01:16 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:05.789 11:01:16 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:05.789 11:01:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:05.789 11:01:16 -- setup/hugepages.sh@51 -- # shift 00:05:05.789 11:01:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:05.789 11:01:16 -- setup/hugepages.sh@52 -- # local node_ids 00:05:05.789 11:01:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.789 11:01:16 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:05.789 11:01:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:05.789 11:01:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:05.789 11:01:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.789 11:01:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:05.789 11:01:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.789 11:01:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.789 11:01:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.789 11:01:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:05.789 11:01:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.789 11:01:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:05.789 11:01:16 -- setup/hugepages.sh@73 -- # return 0 00:05:05.789 11:01:16 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:05.789 11:01:16 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:05.789 11:01:16 -- setup/hugepages.sh@146 -- # setup output 00:05:05.789 11:01:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.789 11:01:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.050 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.050 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.050 11:01:17 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:06.050 11:01:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:06.050 11:01:17 -- setup/hugepages.sh@89 -- # local node 00:05:06.050 11:01:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.050 11:01:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.050 11:01:17 -- setup/hugepages.sh@92 -- # local surp 00:05:06.050 11:01:17 -- setup/hugepages.sh@93 -- # local resv 00:05:06.050 11:01:17 -- setup/hugepages.sh@94 -- # local anon 00:05:06.050 11:01:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.050 11:01:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.050 11:01:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.050 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.050 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.050 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.050 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.050 11:01:17 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.050 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.050 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.050 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7705324 kB' 'MemAvailable: 10495872 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456484 kB' 'Inactive: 2658388 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120000 kB' 'Mapped: 50984 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182872 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100316 kB' 'KernelStack: 6632 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 
-- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.050 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.050 
11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.050 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.051 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.051 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.051 11:01:17 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.051 11:01:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.051 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.051 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.051 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.051 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.051 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.051 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.051 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.051 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.051 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7705324 kB' 'MemAvailable: 10495872 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456596 kB' 'Inactive: 2658388 kB' 
'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119852 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182888 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100332 kB' 'KernelStack: 6608 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.051 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.051 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # 
continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.052 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.052 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.314 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.314 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.314 11:01:17 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.314 11:01:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.314 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.314 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.314 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.314 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.314 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.314 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.314 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.314 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.314 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7705600 kB' 'MemAvailable: 10496148 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456848 kB' 'Inactive: 2658388 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120100 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182872 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100316 kB' 'KernelStack: 6608 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.314 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.314 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.315 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.315 11:01:17 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.315 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.315 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.315 nr_hugepages=512 00:05:06.315 resv_hugepages=0 00:05:06.315 surplus_hugepages=0 00:05:06.315 anon_hugepages=0 00:05:06.315 11:01:17 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.315 11:01:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:06.315 11:01:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.315 11:01:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.315 11:01:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.315 11:01:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:06.315 11:01:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:06.315 11:01:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.315 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.315 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.315 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.316 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.316 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.316 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.316 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.316 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.316 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7705668 kB' 'MemAvailable: 10496216 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456384 kB' 'Inactive: 2658388 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119948 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182868 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100312 kB' 'KernelStack: 6624 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 
11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 
11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.316 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.316 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.317 11:01:17 -- setup/common.sh@33 -- # echo 512 00:05:06.317 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.317 11:01:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:06.317 11:01:17 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.317 11:01:17 -- setup/hugepages.sh@27 -- # local node 00:05:06.317 11:01:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.317 11:01:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.317 11:01:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.317 11:01:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.317 11:01:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.317 11:01:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.317 11:01:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.317 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.317 11:01:17 -- setup/common.sh@18 -- # local node=0 00:05:06.317 11:01:17 -- setup/common.sh@19 -- # local 
var val 00:05:06.317 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.317 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.317 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.317 11:01:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.317 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.317 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7706336 kB' 'MemUsed: 4532780 kB' 'SwapCached: 0 kB' 'Active: 456688 kB' 'Inactive: 2658388 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2996704 kB' 'Mapped: 50812 kB' 'AnonPages: 119920 kB' 'Shmem: 10488 kB' 'KernelStack: 6608 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182856 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- 
setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.317 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.317 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.318 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.318 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.318 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.318 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.318 11:01:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.318 node0=512 expecting 512 00:05:06.318 ************************************ 00:05:06.318 END TEST per_node_1G_alloc 00:05:06.318 ************************************ 00:05:06.318 11:01:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.318 11:01:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.318 11:01:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.318 11:01:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.318 00:05:06.318 real 0m0.579s 00:05:06.318 user 0m0.256s 00:05:06.318 sys 0m0.323s 00:05:06.318 11:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.318 11:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:06.318 11:01:17 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:06.318 11:01:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.318 11:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.318 11:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:06.318 ************************************ 00:05:06.318 START TEST even_2G_alloc 00:05:06.318 ************************************ 00:05:06.318 11:01:17 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:06.318 11:01:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:06.318 11:01:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:06.318 11:01:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:06.318 11:01:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.318 11:01:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.318 11:01:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.318 11:01:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.318 11:01:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:06.318 11:01:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.318 11:01:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.318 11:01:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.318 11:01:17 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:06.318 11:01:17 -- setup/hugepages.sh@83 -- # : 0 00:05:06.318 11:01:17 -- setup/hugepages.sh@84 -- # : 0 00:05:06.318 11:01:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.318 11:01:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:06.318 11:01:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:06.318 11:01:17 -- setup/hugepages.sh@153 -- # setup output 00:05:06.318 11:01:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.318 11:01:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.840 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.840 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.840 11:01:17 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:06.840 11:01:17 -- setup/hugepages.sh@89 -- # local node 00:05:06.840 11:01:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.840 11:01:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.840 11:01:17 -- setup/hugepages.sh@92 -- # local surp 00:05:06.840 11:01:17 -- setup/hugepages.sh@93 -- # local resv 00:05:06.840 11:01:17 -- setup/hugepages.sh@94 -- # local anon 00:05:06.840 11:01:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.840 11:01:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.840 11:01:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.840 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.840 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.840 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.840 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.840 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.840 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.840 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.840 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6658372 kB' 'MemAvailable: 9448920 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456840 kB' 'Inactive: 2658388 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120140 kB' 'Mapped: 50912 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182848 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100292 kB' 'KernelStack: 6616 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 
11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.840 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.840 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # 
continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.841 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.841 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.841 11:01:17 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.841 11:01:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.841 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.841 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.841 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.841 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.841 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.841 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.841 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.841 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.841 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6658372 kB' 'MemAvailable: 9448920 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 2658388 kB' 'Active(anon): 128828 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50912 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182840 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100284 kB' 'KernelStack: 6584 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 
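The xtrace above is setup/common.sh's get_meminfo walking every field of /proc/meminfo (or, when a node is given, /sys/devices/system/node/node0/meminfo) until it reaches the requested key, then echoing that key's value; hugepages.sh uses the returned numbers to check the allocation, e.g. (( 512 == nr_hugepages + surp + resv )) for the per-node 1G test and 1024 pages for even_2G_alloc. A minimal standalone sketch of that lookup pattern, assuming a hypothetical helper name and a sed/awk filter instead of the repository's mapfile-and-read loop:

    # Illustrative only: print the value of a /proc/meminfo-style field,
    # optionally from a per-node meminfo file (node files prefix each line
    # with "Node <n> ", which is stripped before matching).
    get_meminfo_value() {
        local key=$1 node=${2-} file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2 }'
    }
    # In this run: get_meminfo_value HugePages_Total    -> 512
    #              get_meminfo_value HugePages_Surp 0   -> 0 (node0)

Because HugePages_Rsvd and HugePages_Surp both read back as 0 here, the verification arithmetic reduces to comparing HugePages_Total against the requested page count, which is why the test prints node0=512 expecting 512 before moving on.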
00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.841 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.841 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # 
continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.842 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.842 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.842 11:01:17 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.842 11:01:17 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.842 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.842 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.842 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.842 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.842 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.842 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.842 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.842 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.842 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6658120 kB' 'MemAvailable: 9448668 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456416 kB' 'Inactive: 2658388 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119948 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182852 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100296 kB' 'KernelStack: 6624 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.842 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.842 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- 
setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.843 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.843 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.844 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.844 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.844 nr_hugepages=1024 00:05:06.844 11:01:17 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.844 11:01:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.844 resv_hugepages=0 00:05:06.844 11:01:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.844 surplus_hugepages=0 00:05:06.844 anon_hugepages=0 00:05:06.844 11:01:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.844 11:01:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.844 11:01:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.844 11:01:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.844 11:01:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.844 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.844 11:01:17 -- setup/common.sh@18 -- # local node= 00:05:06.844 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.844 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.844 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.844 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.844 11:01:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.844 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.844 11:01:17 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6658412 kB' 'MemAvailable: 9448960 kB' 'Buffers: 2684 kB' 'Cached: 2994020 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 2658388 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50864 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182860 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100304 kB' 'KernelStack: 6576 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 
11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.844 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.844 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 
00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.845 11:01:17 -- setup/common.sh@33 -- # echo 1024 00:05:06.845 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.845 11:01:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.845 11:01:17 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.845 11:01:17 -- setup/hugepages.sh@27 -- # local node 00:05:06.845 11:01:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.845 11:01:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.845 11:01:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.845 11:01:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.845 11:01:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.845 11:01:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.845 11:01:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.845 11:01:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.845 11:01:17 -- setup/common.sh@18 -- # local node=0 00:05:06.845 11:01:17 -- setup/common.sh@19 -- # local var val 00:05:06.845 11:01:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.845 11:01:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.845 11:01:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.845 11:01:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.845 11:01:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.845 11:01:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6658412 kB' 'MemUsed: 5580704 kB' 'SwapCached: 0 kB' 'Active: 456408 kB' 'Inactive: 2658392 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2996708 kB' 'Mapped: 50812 kB' 'AnonPages: 119708 kB' 'Shmem: 10488 kB' 'KernelStack: 6560 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182864 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.845 11:01:17 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.845 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.845 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.845 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- 
setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # continue 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.846 11:01:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.846 11:01:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.846 11:01:17 -- setup/common.sh@33 -- # echo 0 00:05:06.846 11:01:17 -- setup/common.sh@33 -- # return 0 00:05:06.846 11:01:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.846 11:01:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.846 11:01:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.846 11:01:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.846 
node0=1024 expecting 1024 00:05:06.846 11:01:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.846 11:01:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.846 00:05:06.846 real 0m0.584s 00:05:06.846 user 0m0.298s 00:05:06.846 sys 0m0.292s 00:05:06.846 ************************************ 00:05:06.846 END TEST even_2G_alloc 00:05:06.846 ************************************ 00:05:06.846 11:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.846 11:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:07.105 11:01:17 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:07.105 11:01:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.105 11:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.105 11:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:07.105 ************************************ 00:05:07.105 START TEST odd_alloc 00:05:07.105 ************************************ 00:05:07.105 11:01:18 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:07.105 11:01:18 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:07.105 11:01:18 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:07.105 11:01:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:07.105 11:01:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.105 11:01:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.105 11:01:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.105 11:01:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:07.105 11:01:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.105 11:01:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.105 11:01:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.105 11:01:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:07.105 11:01:18 -- setup/hugepages.sh@83 -- # : 0 00:05:07.105 11:01:18 -- setup/hugepages.sh@84 -- # : 0 00:05:07.105 11:01:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.105 11:01:18 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:07.105 11:01:18 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:07.105 11:01:18 -- setup/hugepages.sh@160 -- # setup output 00:05:07.105 11:01:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.105 11:01:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.364 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.364 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.364 11:01:18 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:07.364 11:01:18 -- setup/hugepages.sh@89 -- # local node 00:05:07.364 11:01:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.364 11:01:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.364 11:01:18 -- setup/hugepages.sh@92 -- # local surp 00:05:07.364 11:01:18 -- setup/hugepages.sh@93 -- # local resv 00:05:07.364 11:01:18 -- setup/hugepages.sh@94 -- # local anon 00:05:07.364 11:01:18 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.364 11:01:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.364 11:01:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.364 11:01:18 -- setup/common.sh@18 -- # local node= 00:05:07.364 11:01:18 -- setup/common.sh@19 -- # local var val 00:05:07.364 11:01:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.364 11:01:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.364 11:01:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.364 11:01:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.364 11:01:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.364 11:01:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656512 kB' 'MemAvailable: 9447064 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456472 kB' 'Inactive: 2658392 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182868 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100312 kB' 'KernelStack: 6600 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 
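The get_meminfo trace above reduces to scanning a meminfo file with IFS=': ' and echoing the value for the requested key. A minimal sketch of that lookup under a hypothetical helper name (the traced common.sh additionally handles per-node /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix via mapfile, which is left out here):

    # Hypothetical, simplified version of the key lookup the trace performs.
    get_meminfo_value() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do          # "HugePages_Free:  1025" -> var/val
        if [[ $var == "$get" ]]; then
          echo "$val"                               # a unit suffix such as kB lands in $_
          return 0
        fi
      done < "$mem_f"
      return 1
    }

    get_meminfo_value AnonHugePages    # prints 0 on this run, matching the anon=0 seen below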
00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # 
continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.364 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.364 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.365 11:01:18 -- setup/common.sh@33 -- # echo 0 00:05:07.365 11:01:18 -- setup/common.sh@33 -- # return 0 00:05:07.365 11:01:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:07.365 11:01:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.365 11:01:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.365 11:01:18 -- setup/common.sh@18 -- # local node= 00:05:07.365 11:01:18 -- setup/common.sh@19 -- # local var val 00:05:07.365 11:01:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.365 11:01:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.365 11:01:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.365 11:01:18 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.365 11:01:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.365 11:01:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6656812 kB' 'MemAvailable: 9447364 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456428 kB' 'Inactive: 2658392 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182892 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100336 kB' 'KernelStack: 6624 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 
-- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 
00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 
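Around these scans, the hugepages.sh steps traced here gather AnonHugePages, HugePages_Surp and HugePages_Rsvd and later check them against the requested pool ((( 1025 == nr_hugepages + surp + resv )) further down, followed by a HugePages_Total read). A rough sketch of that bookkeeping, not the script's exact control flow, reusing the hypothetical helper above (the function name is illustrative):

    verify_hugepage_counts() {
      local nr_hugepages=$1 anon surp resv total     # 1025 for the odd_alloc case
      anon=$(get_meminfo_value AnonHugePages)        # anonymous THP usage, in kB
      surp=$(get_meminfo_value HugePages_Surp)       # surplus pages beyond the configured pool
      resv=$(get_meminfo_value HugePages_Rsvd)       # pages reserved but not yet faulted in
      total=$(get_meminfo_value HugePages_Total)
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
      # Healthy pool: the kernel-reported total accounts for the requested pages plus surplus/reserved.
      (( total == nr_hugepages + surp + resv ))
    }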
00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.365 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.365 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.365 11:01:18 -- setup/common.sh@33 -- # echo 0 00:05:07.365 11:01:18 -- setup/common.sh@33 -- # return 0 00:05:07.365 11:01:18 -- setup/hugepages.sh@99 -- # surp=0 00:05:07.365 11:01:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.365 11:01:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.365 11:01:18 -- setup/common.sh@18 -- # local node= 00:05:07.365 11:01:18 -- setup/common.sh@19 -- # local var val 00:05:07.365 11:01:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.365 11:01:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.366 11:01:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.366 11:01:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.366 11:01:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.366 11:01:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6657316 kB' 'MemAvailable: 9447868 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456620 kB' 'Inactive: 2658392 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182864 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100308 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.366 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.366 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 
11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.627 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.627 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 
-- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.628 11:01:18 -- setup/common.sh@33 -- # echo 0 00:05:07.628 11:01:18 -- setup/common.sh@33 -- # return 0 00:05:07.628 nr_hugepages=1025 00:05:07.628 resv_hugepages=0 00:05:07.628 surplus_hugepages=0 00:05:07.628 anon_hugepages=0 00:05:07.628 11:01:18 -- setup/hugepages.sh@100 -- # resv=0 00:05:07.628 11:01:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:07.628 11:01:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.628 11:01:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.628 11:01:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.628 11:01:18 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.628 11:01:18 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:07.628 11:01:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.628 11:01:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.628 11:01:18 -- setup/common.sh@18 -- # local node= 00:05:07.628 11:01:18 -- setup/common.sh@19 -- # local var val 00:05:07.628 11:01:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.628 11:01:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.628 11:01:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.628 11:01:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.628 11:01:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.628 11:01:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6657316 kB' 'MemAvailable: 9447868 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456360 kB' 'Inactive: 2658392 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182856 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100300 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 
11:01:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.628 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.628 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 
11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.629 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.629 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.629 11:01:18 -- setup/common.sh@33 -- # echo 1025 00:05:07.629 11:01:18 -- setup/common.sh@33 -- # return 0 00:05:07.629 11:01:18 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:07.629 11:01:18 -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.629 11:01:18 -- setup/hugepages.sh@27 -- # local node 00:05:07.629 11:01:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.629 11:01:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
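The loop traced above is the script's generic meminfo reader: it strips the "Node N " prefix from the per-node view, splits each line on ": ", and prints the value once the requested key matches. A minimal sketch of that lookup, assuming the same /sys layout; the helper name read_node_meminfo is illustrative only, not a function from setup/common.sh:

    read_node_meminfo() {
        local key=$1 node=$2
        local file=/proc/meminfo
        # Prefer the per-node view when it exists, e.g. /sys/devices/system/node/node0/meminfo
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            file=/sys/devices/system/node/node${node}/meminfo
        # Per-node lines read "Node 0 HugePages_Total: 1025"; drop the prefix,
        # then split on ': ' and print the value for the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(sed "s/^Node ${node} //" "$file")
        return 1
    }

    # e.g. read_node_meminfo HugePages_Total 0  ->  1025 on this run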
00:05:07.629 11:01:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.629 11:01:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.629 11:01:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.629 11:01:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.629 11:01:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.629 11:01:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.629 11:01:18 -- setup/common.sh@18 -- # local node=0 00:05:07.629 11:01:18 -- setup/common.sh@19 -- # local var val 00:05:07.630 11:01:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.630 11:01:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.630 11:01:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.630 11:01:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.630 11:01:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.630 11:01:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6657316 kB' 'MemUsed: 5581800 kB' 'SwapCached: 0 kB' 'Active: 456436 kB' 'Inactive: 2658392 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2996708 kB' 'Mapped: 50812 kB' 'AnonPages: 119708 kB' 'Shmem: 10488 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182856 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 
11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 
11:01:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # continue 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.630 11:01:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.630 11:01:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.630 11:01:18 -- setup/common.sh@33 -- # echo 0 00:05:07.630 11:01:18 -- setup/common.sh@33 -- # return 0 00:05:07.630 11:01:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.630 11:01:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.630 11:01:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.630 11:01:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.630 11:01:18 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:07.630 node0=1025 expecting 1025 00:05:07.630 11:01:18 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:07.630 00:05:07.630 real 0m0.615s 00:05:07.630 user 0m0.290s 00:05:07.630 sys 0m0.329s 00:05:07.630 11:01:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.630 ************************************ 00:05:07.630 END TEST odd_alloc 00:05:07.630 ************************************ 00:05:07.630 11:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 11:01:18 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:07.630 11:01:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.630 11:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.630 11:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 ************************************ 00:05:07.630 START TEST custom_alloc 00:05:07.630 ************************************ 00:05:07.630 11:01:18 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:07.630 11:01:18 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:07.630 11:01:18 -- setup/hugepages.sh@169 -- # local node 00:05:07.631 11:01:18 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:07.631 11:01:18 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:07.631 11:01:18 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:07.631 11:01:18 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:07.631 11:01:18 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.631 11:01:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.631 11:01:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.631 11:01:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.631 11:01:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.631 11:01:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.631 11:01:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.631 11:01:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@83 -- # : 0 00:05:07.631 11:01:18 -- setup/hugepages.sh@84 -- # : 0 00:05:07.631 11:01:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.631 11:01:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.631 11:01:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:07.631 11:01:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.631 11:01:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.631 11:01:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.631 11:01:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.631 11:01:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.631 11:01:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:07.631 11:01:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.631 11:01:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.631 11:01:18 -- setup/hugepages.sh@78 -- # return 0 00:05:07.631 11:01:18 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:07.631 11:01:18 -- setup/hugepages.sh@187 -- # setup output 00:05:07.631 11:01:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.631 11:01:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.171 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.171 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.171 11:01:19 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:08.171 11:01:19 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:08.172 11:01:19 -- setup/hugepages.sh@89 -- # local node 00:05:08.172 11:01:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.172 11:01:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.172 11:01:19 -- setup/hugepages.sh@92 -- # local surp 
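The custom_alloc run above requests 1048576 kB of hugepages and pins them to a single node via HUGENODE='nodes_hp[0]=512'. A small worked sketch of that sizing step, assuming the size argument is in kB and the 2048 kB Hugepagesize reported further down in this log; the variable names are illustrative only:

    request_kb=1048576                          # 1 GiB requested via get_test_nr_hugepages
    hugepagesize_kb=2048                        # Hugepagesize from /proc/meminfo on this run
    pages=$(( request_kb / hugepagesize_kb ))   # 512
    echo "HUGENODE='nodes_hp[0]=${pages}'"      # all 512 pages requested on node 0

    # which is consistent with the 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB'
    # values the meminfo dumps below report.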
00:05:08.172 11:01:19 -- setup/hugepages.sh@93 -- # local resv 00:05:08.172 11:01:19 -- setup/hugepages.sh@94 -- # local anon 00:05:08.172 11:01:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.172 11:01:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.172 11:01:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.172 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.172 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.172 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.172 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.172 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.172 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.172 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.172 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7710780 kB' 'MemAvailable: 10501332 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456632 kB' 'Inactive: 2658392 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120192 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182828 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100272 kB' 'KernelStack: 6600 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.172 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.172 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.173 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.173 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.173 11:01:19 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.173 11:01:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.173 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.173 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.173 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.173 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.173 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
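The HugePages_Surp and HugePages_Rsvd lookups that follow feed the same accounting check applied earlier in this log (the "(( 1025 == nr_hugepages + surp + resv ))" step in the odd_alloc test). A sketch of that check with this run's custom_alloc numbers, purely illustrative:

    nr_hugepages=512   # requested above
    total=512          # HugePages_Total from /proc/meminfo
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    # verify_nr_hugepages is satisfied when the kernel's total equals the
    # requested count plus surplus and reserved pages
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"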
00:05:08.173 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.173 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.173 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.173 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7710528 kB' 'MemAvailable: 10501080 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456372 kB' 'Inactive: 2658392 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182836 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100280 kB' 'KernelStack: 6608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- 
setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.173 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.173 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.174 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.174 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.174 11:01:19 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.174 11:01:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.174 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.174 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.174 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.174 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.174 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.174 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.174 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.174 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.174 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7710528 kB' 'MemAvailable: 10501080 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456640 kB' 'Inactive: 2658392 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182836 kB' 
'SReclaimable: 82556 kB' 'SUnreclaim: 100280 kB' 'KernelStack: 6608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.174 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.174 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 
00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.175 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.175 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 
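Once this scan completes, the test records resv=0, keeps nr_hugepages=512, and asserts that the kernel's HugePages_Total equals nr_hugepages + surplus + reserved before repeating the check per NUMA node. A hedged sketch of that verification step follows; expected_hugepages and the awk extraction are illustrative assumptions, not the literal hugepages.sh code.

  # Sketch of the verification that follows in the trace, not hugepages.sh itself.
  expected_hugepages=512   # assumed target, matching nr_hugepages=512 below
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  if (( total == expected_hugepages + surp + resv )); then
      echo "nr_hugepages=$expected_hugepages verified"
  fi

  # Per-node follow-up; on this single-node runner node0 is expected to hold all 512:
  grep HugePages_Total /sys/devices/system/node/node0/meminfo
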
00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.176 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.176 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.176 11:01:19 -- setup/hugepages.sh@100 -- # resv=0 00:05:08.176 nr_hugepages=512 00:05:08.176 11:01:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:08.176 resv_hugepages=0 00:05:08.176 surplus_hugepages=0 00:05:08.176 anon_hugepages=0 00:05:08.176 11:01:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.176 11:01:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.176 11:01:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.176 11:01:19 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:08.176 11:01:19 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:08.176 11:01:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.176 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.176 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.176 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.176 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.176 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.176 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.176 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.176 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.176 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7710852 kB' 'MemAvailable: 10501404 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456708 kB' 'Inactive: 2658392 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119744 kB' 'Mapped: 50812 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182828 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100272 kB' 'KernelStack: 6608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 
'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.176 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.176 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 
11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.177 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.177 11:01:19 -- setup/common.sh@33 -- # echo 512 00:05:08.177 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.177 11:01:19 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:08.177 11:01:19 -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.177 11:01:19 -- setup/hugepages.sh@27 -- # local node 00:05:08.177 11:01:19 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:08.177 11:01:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.177 11:01:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.177 11:01:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.177 11:01:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.177 11:01:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.177 11:01:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.177 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.177 11:01:19 -- setup/common.sh@18 -- # local node=0 00:05:08.177 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.177 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.177 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.177 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.177 11:01:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.177 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.177 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.177 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7710852 kB' 'MemUsed: 4528264 kB' 'SwapCached: 0 kB' 'Active: 456760 kB' 'Inactive: 2658392 kB' 'Active(anon): 128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2996708 kB' 'Mapped: 50812 kB' 'AnonPages: 120044 kB' 'Shmem: 10488 kB' 'KernelStack: 6592 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182828 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 
11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.178 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.178 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.178 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.178 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.178 11:01:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.178 11:01:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.178 11:01:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.178 11:01:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.178 node0=512 expecting 512 00:05:08.178 11:01:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:08.178 11:01:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:08.178 ************************************ 00:05:08.178 END TEST custom_alloc 00:05:08.178 ************************************ 00:05:08.178 00:05:08.178 real 0m0.606s 00:05:08.178 user 0m0.279s 00:05:08.179 sys 0m0.327s 00:05:08.179 11:01:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.179 11:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.437 11:01:19 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:08.437 11:01:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.437 11:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.437 11:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.437 ************************************ 00:05:08.437 START TEST no_shrink_alloc 00:05:08.437 ************************************ 00:05:08.437 11:01:19 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:08.437 11:01:19 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:08.437 11:01:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.437 11:01:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:08.437 11:01:19 -- 
setup/hugepages.sh@51 -- # shift 00:05:08.437 11:01:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:08.437 11:01:19 -- setup/hugepages.sh@52 -- # local node_ids 00:05:08.437 11:01:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.437 11:01:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.437 11:01:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:08.437 11:01:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:08.437 11:01:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.437 11:01:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.437 11:01:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.437 11:01:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.437 11:01:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.437 11:01:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:08.437 11:01:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:08.437 11:01:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:08.437 11:01:19 -- setup/hugepages.sh@73 -- # return 0 00:05:08.437 11:01:19 -- setup/hugepages.sh@198 -- # setup output 00:05:08.437 11:01:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.437 11:01:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.697 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.697 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.697 11:01:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:08.697 11:01:19 -- setup/hugepages.sh@89 -- # local node 00:05:08.697 11:01:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.697 11:01:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.697 11:01:19 -- setup/hugepages.sh@92 -- # local surp 00:05:08.697 11:01:19 -- setup/hugepages.sh@93 -- # local resv 00:05:08.697 11:01:19 -- setup/hugepages.sh@94 -- # local anon 00:05:08.697 11:01:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.697 11:01:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.697 11:01:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.697 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.697 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.697 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.697 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.697 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.697 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.697 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.697 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6662380 kB' 'MemAvailable: 9452932 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456592 kB' 'Inactive: 2658392 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120164 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 
182868 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100312 kB' 'KernelStack: 6604 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 
-- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.697 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.697 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- 
setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.698 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.698 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.698 11:01:19 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.698 11:01:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.698 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.698 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.698 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.698 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.698 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.698 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.698 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.698 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.698 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.698 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.698 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6662128 kB' 'MemAvailable: 9452680 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456764 kB' 'Inactive: 2658392 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120140 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182864 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100308 kB' 'KernelStack: 6628 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-32: one IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue round per field of the snapshot above, MemTotal through HugePages_Free ...]
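The per-field rounds above (they finish on the next line, where HugePages_Surp finally matches) are the lookup loop of get_meminfo in setup/common.sh: read /proc/meminfo, or a node's copy of it, strip any "Node N " prefix, then walk the fields until the requested one is found. A rough reconstruction from this trace alone, not the upstream script, so details may differ:

    #!/usr/bin/env bash
    # get_meminfo <field> [node] - print one meminfo value, as traced in this log.
    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # per-node call, e.g. get_meminfo HugePages_Surp 0
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1                     # a node was requested but is not present
        fi
        mapfile -t mem < "$mem_f"
        # node files prefix every line with "Node 0 "; strip that
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                  # e.g. 0 for HugePages_Surp in this run
            return 0
        done
    }
    get_meminfo HugePages_Surp           # prints 0 on this machine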
00:05:08.700 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.700 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.700 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.700 11:01:19 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.700 11:01:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.700 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.700 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.700 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.700 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.700 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.700 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.700 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.700 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.700 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.700 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6661880 kB' 'MemAvailable: 9452432 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456796 kB' 'Inactive: 2658392 kB' 'Active(anon): 128968 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120112 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182848 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100292 kB' 'KernelStack: 6628 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.700 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.700 11:01:19 -- setup/common.sh@31 -- # read -r var val 
_
[... setup/common.sh@31-32: the same per-field scan repeats over the snapshot, now matching each field against HugePages_Rsvd, Buffers through AnonHugePages ...]
00:05:08.961 11:01:19 -- setup/common.sh@31 -- #
read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.961 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.961 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.961 nr_hugepages=1024 00:05:08.961 11:01:19 -- setup/hugepages.sh@100 -- # resv=0 00:05:08.961 11:01:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.961 resv_hugepages=0 00:05:08.961 11:01:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.961 surplus_hugepages=0 00:05:08.961 anon_hugepages=0 00:05:08.961 11:01:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.961 11:01:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.961 11:01:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.961 11:01:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.961 11:01:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.961 11:01:19 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.961 11:01:19 -- setup/common.sh@18 -- # local node= 00:05:08.961 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.961 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.961 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.961 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.961 11:01:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.961 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.961 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6661880 kB' 'MemAvailable: 9452432 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 456644 kB' 'Inactive: 2658392 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120036 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182848 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100292 kB' 'KernelStack: 6644 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.961 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.961 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]]
[... setup/common.sh@31-32: per-field scan of the snapshot continues, now matching each field against HugePages_Total, SwapCached through ShmemPmdMapped ...]
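Once the HugePages_Total round matches on the next line, hugepages.sh has everything the assertions at @107-@110 need. Roughly what those checks amount to, using the values from this log and the get_meminfo sketch above (an approximation of the traced logic, not the script verbatim):

    nr_hugepages=1024                       # pool size this test expects
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1024

    (( total == nr_hugepages + surp + resv )) || exit 1   # no stray surplus/reserved pages
    (( total == nr_hugepages )) || exit 1                  # kernel pool matches the request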
00:05:08.962 11:01:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.962 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.962 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.963 11:01:19 -- setup/common.sh@33 -- # echo 1024 00:05:08.963 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.963 11:01:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.963 11:01:19 -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.963 11:01:19 -- setup/hugepages.sh@27 -- # local node 00:05:08.963 11:01:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.963 11:01:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.963 11:01:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.963 11:01:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.963 11:01:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.963 11:01:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.963 11:01:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.963 11:01:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.963 11:01:19 -- setup/common.sh@18 -- # local node=0 00:05:08.963 11:01:19 -- setup/common.sh@19 -- # local var val 00:05:08.963 11:01:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.963 11:01:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.963 11:01:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.963 11:01:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.963 11:01:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.963 11:01:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6661880 kB' 'MemUsed: 5577236 kB' 'SwapCached: 0 kB' 'Active: 456644 kB' 'Inactive: 2658392 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2996708 kB' 
'Mapped: 50940 kB' 'AnonPages: 120220 kB' 'Shmem: 10488 kB' 'KernelStack: 6612 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 182844 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.963 11:01:19 -- setup/common.sh@32 -- # continue 00:05:08.963 11:01:19 -- setup/common.sh@31 -- # 
IFS=': '
[... setup/common.sh@31-32: per-field scan of node0's meminfo snapshot, matching each field against HugePages_Surp, Mlocked through HugePages_Free ...]
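The HugePages_Surp match for node0 and the resulting tally follow on the next line, ending in "node0=1024 expecting 1024". Roughly what hugepages.sh@115-@130 is doing with those numbers, reconstructed for this single-node VM and reusing the get_meminfo sketch above (the nodes_test/nodes_sys seed values are taken from this log, not computed here):

    declare -A nodes_test=( [0]=1024 )   # hugepages observed on node0 earlier in the test
    declare -A nodes_sys=( [0]=1024 )    # hugepages requested for node0
    resv=0                               # HugePages_Rsvd
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # += 0 here
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
    done

The same line then reruns scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512, which is where the "Requested 512 hugepages but 1024 already allocated on node0" notice comes from.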
00:05:08.964 11:01:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.964 11:01:19 -- setup/common.sh@33 -- # echo 0 00:05:08.964 11:01:19 -- setup/common.sh@33 -- # return 0 00:05:08.964 11:01:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.964 11:01:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.964 11:01:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.964 11:01:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.964 11:01:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:08.964 node0=1024 expecting 1024 00:05:08.964 11:01:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:08.964 11:01:19 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:08.964 11:01:19 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:08.964 11:01:19 -- setup/hugepages.sh@202 -- # setup output 00:05:08.964 11:01:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.964 11:01:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.222 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.222 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:09.222 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:09.223 11:01:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:09.223 11:01:20 -- setup/hugepages.sh@89 -- # local node 00:05:09.223 11:01:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.223 11:01:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.223 11:01:20 -- setup/hugepages.sh@92 -- # local surp 00:05:09.223 11:01:20 -- setup/hugepages.sh@93 -- # local resv 00:05:09.223 11:01:20 -- setup/hugepages.sh@94 -- # local anon 00:05:09.223 11:01:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.223 11:01:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.223 11:01:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.223 11:01:20 -- setup/common.sh@18 -- # local node= 00:05:09.223 11:01:20 -- setup/common.sh@19 -- # local var val 00:05:09.223 11:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.223 11:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.223 11:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.223 11:01:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.223 11:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.223 11:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6672272 kB' 'MemAvailable: 9462824 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 454328 kB' 'Inactive: 2658392 kB' 'Active(anon): 126500 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117644 kB' 'Mapped: 50368 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 182840 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 100284 kB' 'KernelStack: 6696 kB' 'PageTables: 4480 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- 
# continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.223 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.223 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 
11:01:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.224 11:01:20 -- setup/common.sh@33 -- # echo 0 00:05:09.224 11:01:20 -- setup/common.sh@33 -- # return 0 00:05:09.224 11:01:20 -- setup/hugepages.sh@97 -- # anon=0 00:05:09.224 11:01:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.224 11:01:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.224 11:01:20 -- setup/common.sh@18 -- # local node= 00:05:09.224 11:01:20 -- setup/common.sh@19 -- # local var val 00:05:09.224 11:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.224 11:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.224 11:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.224 11:01:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.224 11:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.224 11:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6672668 kB' 'MemAvailable: 9463216 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 453572 kB' 'Inactive: 2658392 kB' 'Active(anon): 125744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117088 kB' 'Mapped: 50144 kB' 'Shmem: 10488 kB' 'KReclaimable: 82552 kB' 'Slab: 182728 kB' 'SReclaimable: 82552 kB' 'SUnreclaim: 100176 kB' 'KernelStack: 6472 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 
11:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.224 11:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.224 
11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.224 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.484 11:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.484 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.484 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.484 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.484 11:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.484 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.484 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.484 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.484 11:01:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 
00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # 
continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.485 11:01:20 -- setup/common.sh@33 -- # echo 0 00:05:09.485 11:01:20 -- setup/common.sh@33 -- # return 0 00:05:09.485 11:01:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:09.485 11:01:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.485 11:01:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.485 11:01:20 -- setup/common.sh@18 -- # local node= 00:05:09.485 11:01:20 -- setup/common.sh@19 -- # local var val 00:05:09.485 11:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.485 11:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.485 11:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.485 11:01:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.485 11:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.485 11:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6673032 kB' 'MemAvailable: 9463580 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 453544 kB' 'Inactive: 2658392 kB' 'Active(anon): 125716 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117064 kB' 'Mapped: 49964 kB' 'Shmem: 10488 kB' 'KReclaimable: 82552 kB' 'Slab: 182692 kB' 'SReclaimable: 82552 kB' 'SUnreclaim: 100140 kB' 'KernelStack: 6496 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # 
continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.485 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.485 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- 
setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.486 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.486 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.486 11:01:20 -- setup/common.sh@33 -- # echo 0 00:05:09.487 11:01:20 -- setup/common.sh@33 -- # return 0 00:05:09.487 nr_hugepages=1024 00:05:09.487 11:01:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:09.487 11:01:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.487 resv_hugepages=0 00:05:09.487 11:01:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.487 surplus_hugepages=0 00:05:09.487 anon_hugepages=0 00:05:09.487 11:01:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.487 11:01:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.487 11:01:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.487 11:01:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.487 11:01:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.487 11:01:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.487 11:01:20 -- setup/common.sh@18 -- # local node= 00:05:09.487 11:01:20 -- 
setup/common.sh@19 -- # local var val 00:05:09.487 11:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.487 11:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.487 11:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.487 11:01:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.487 11:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.487 11:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6673032 kB' 'MemAvailable: 9463580 kB' 'Buffers: 2684 kB' 'Cached: 2994024 kB' 'SwapCached: 0 kB' 'Active: 453564 kB' 'Inactive: 2658392 kB' 'Active(anon): 125736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117084 kB' 'Mapped: 49964 kB' 'Shmem: 10488 kB' 'KReclaimable: 82552 kB' 'Slab: 182692 kB' 'SReclaimable: 82552 kB' 'SUnreclaim: 100140 kB' 'KernelStack: 6496 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 6094848 kB' 'DirectMap1G: 8388608 kB' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 
11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 
11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.487 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.487 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- 
setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.488 11:01:20 -- setup/common.sh@33 -- # echo 1024 00:05:09.488 11:01:20 -- setup/common.sh@33 -- # return 0 00:05:09.488 11:01:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.488 11:01:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.488 11:01:20 -- setup/hugepages.sh@27 -- # local node 00:05:09.488 11:01:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.488 11:01:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.488 11:01:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.488 11:01:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.488 11:01:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.488 11:01:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.488 11:01:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.488 11:01:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.488 11:01:20 -- setup/common.sh@18 -- # local node=0 00:05:09.488 11:01:20 -- setup/common.sh@19 -- # local var val 00:05:09.488 11:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.488 11:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.488 11:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.488 11:01:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.488 11:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.488 11:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6673032 kB' 'MemUsed: 5566084 kB' 'SwapCached: 0 kB' 'Active: 453516 kB' 'Inactive: 2658392 kB' 'Active(anon): 125688 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2658392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2996708 kB' 'Mapped: 49964 kB' 'AnonPages: 117088 kB' 'Shmem: 10488 kB' 'KernelStack: 6512 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82552 kB' 'Slab: 182692 kB' 'SReclaimable: 82552 kB' 'SUnreclaim: 100140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.488 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.488 11:01:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # continue 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.489 11:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.489 11:01:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.489 
11:01:20 -- setup/common.sh@33 -- # echo 0 00:05:09.489 11:01:20 -- setup/common.sh@33 -- # return 0 00:05:09.489 11:01:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.489 11:01:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.489 node0=1024 expecting 1024 00:05:09.489 11:01:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.489 11:01:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.489 11:01:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.489 11:01:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.489 00:05:09.489 real 0m1.171s 00:05:09.489 user 0m0.579s 00:05:09.489 sys 0m0.606s 00:05:09.489 11:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.489 ************************************ 00:05:09.489 END TEST no_shrink_alloc 00:05:09.489 ************************************ 00:05:09.489 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.489 11:01:20 -- setup/hugepages.sh@217 -- # clear_hp 00:05:09.489 11:01:20 -- setup/hugepages.sh@37 -- # local node hp 00:05:09.489 11:01:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.489 11:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.489 11:01:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:09.489 11:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.489 11:01:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:09.489 11:01:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:09.489 11:01:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:09.489 ************************************ 00:05:09.489 END TEST hugepages 00:05:09.489 ************************************ 00:05:09.489 00:05:09.489 real 0m5.218s 00:05:09.489 user 0m2.453s 00:05:09.489 sys 0m2.638s 00:05:09.489 11:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.489 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.489 11:01:20 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:09.489 11:01:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.489 11:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.489 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.489 ************************************ 00:05:09.489 START TEST driver 00:05:09.489 ************************************ 00:05:09.489 11:01:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:09.748 * Looking for test storage... 
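[editor's note] The field-by-field scan traced above is get_meminfo in setup/common.sh walking node0's meminfo until it reaches HugePages_Total (1024) and then HugePages_Surp (0); hugepages.sh only consumes the two echoed values. A minimal, self-contained sketch of that lookup, assuming the same /proc and sysfs layout — the loop below is a simplification for illustration, not the script's actual implementation:

    # Stand-in for the lookup traced above: print the value of KEY from
    # /proc/meminfo, or from the per-node file when a node number is given.
    get_meminfo() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node [0-9] }   # per-node lines carry a "Node <n> " prefix (single-digit nodes kept simple here)
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done <"$file"
        return 1
    }

    get_meminfo HugePages_Total 0    # prints 1024 for the node traced above

Reading the per-node file instead of /proc/meminfo is what lets the test attribute the 1024 reserved huge pages to node0 specifically before comparing against nr_hugepages + surp + resv.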
00:05:09.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.748 11:01:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.748 11:01:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.748 11:01:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.748 11:01:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.748 11:01:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.748 11:01:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.748 11:01:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.748 11:01:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.748 11:01:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.748 11:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.748 11:01:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.748 11:01:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.748 11:01:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.748 11:01:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.748 11:01:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.748 11:01:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.748 11:01:20 -- scripts/common.sh@344 -- # : 1 00:05:09.748 11:01:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.748 11:01:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.748 11:01:20 -- scripts/common.sh@364 -- # decimal 1 00:05:09.748 11:01:20 -- scripts/common.sh@352 -- # local d=1 00:05:09.748 11:01:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.748 11:01:20 -- scripts/common.sh@354 -- # echo 1 00:05:09.748 11:01:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.748 11:01:20 -- scripts/common.sh@365 -- # decimal 2 00:05:09.748 11:01:20 -- scripts/common.sh@352 -- # local d=2 00:05:09.748 11:01:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.748 11:01:20 -- scripts/common.sh@354 -- # echo 2 00:05:09.748 11:01:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.748 11:01:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.748 11:01:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.748 11:01:20 -- scripts/common.sh@367 -- # return 0 00:05:09.748 11:01:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.748 11:01:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.748 --rc genhtml_branch_coverage=1 00:05:09.748 --rc genhtml_function_coverage=1 00:05:09.748 --rc genhtml_legend=1 00:05:09.748 --rc geninfo_all_blocks=1 00:05:09.748 --rc geninfo_unexecuted_blocks=1 00:05:09.748 00:05:09.748 ' 00:05:09.748 11:01:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.748 --rc genhtml_branch_coverage=1 00:05:09.748 --rc genhtml_function_coverage=1 00:05:09.748 --rc genhtml_legend=1 00:05:09.748 --rc geninfo_all_blocks=1 00:05:09.748 --rc geninfo_unexecuted_blocks=1 00:05:09.748 00:05:09.748 ' 00:05:09.748 11:01:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.748 --rc genhtml_branch_coverage=1 00:05:09.748 --rc genhtml_function_coverage=1 00:05:09.748 --rc genhtml_legend=1 00:05:09.748 --rc geninfo_all_blocks=1 00:05:09.748 --rc geninfo_unexecuted_blocks=1 00:05:09.748 00:05:09.748 ' 00:05:09.748 11:01:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.748 --rc genhtml_branch_coverage=1 00:05:09.748 --rc genhtml_function_coverage=1 00:05:09.748 --rc genhtml_legend=1 00:05:09.748 --rc geninfo_all_blocks=1 00:05:09.748 --rc geninfo_unexecuted_blocks=1 00:05:09.748 00:05:09.748 ' 00:05:09.748 11:01:20 -- setup/driver.sh@68 -- # setup reset 00:05:09.748 11:01:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.748 11:01:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.314 11:01:21 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:10.314 11:01:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.314 11:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.314 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 START TEST guess_driver 00:05:10.314 ************************************ 00:05:10.314 11:01:21 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:10.314 11:01:21 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:10.314 11:01:21 -- setup/driver.sh@47 -- # local fail=0 00:05:10.314 11:01:21 -- setup/driver.sh@49 -- # pick_driver 00:05:10.314 11:01:21 -- setup/driver.sh@36 -- # vfio 00:05:10.314 11:01:21 -- setup/driver.sh@21 -- # local iommu_grups 00:05:10.314 11:01:21 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:10.314 11:01:21 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:10.314 11:01:21 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:10.314 11:01:21 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:10.314 11:01:21 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:10.314 11:01:21 -- setup/driver.sh@32 -- # return 1 00:05:10.314 11:01:21 -- setup/driver.sh@38 -- # uio 00:05:10.314 11:01:21 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:10.314 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:10.314 11:01:21 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:10.314 Looking for driver=uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:10.314 11:01:21 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:10.314 11:01:21 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:10.314 11:01:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.314 11:01:21 -- setup/driver.sh@45 -- # setup output config 00:05:10.315 11:01:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.315 11:01:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.250 11:01:22 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:11.250 11:01:22 -- setup/driver.sh@58 -- # continue 00:05:11.250 11:01:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.250 11:01:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:11.250 11:01:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:11.250 11:01:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.250 11:01:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:11.250 11:01:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:11.250 11:01:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.250 11:01:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:11.250 11:01:22 -- setup/driver.sh@65 -- # setup reset 00:05:11.250 11:01:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.250 11:01:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.818 00:05:11.818 real 0m1.459s 00:05:11.818 user 0m0.553s 00:05:11.818 sys 0m0.910s 00:05:11.818 11:01:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.818 ************************************ 00:05:11.818 END TEST guess_driver 00:05:11.818 ************************************ 00:05:11.818 11:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.818 ************************************ 00:05:11.818 END TEST driver 00:05:11.818 ************************************ 00:05:11.818 00:05:11.818 real 0m2.296s 00:05:11.818 user 0m0.901s 00:05:11.818 sys 0m1.448s 00:05:11.818 11:01:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.818 11:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.818 11:01:22 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:11.818 11:01:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.818 11:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.818 11:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.818 ************************************ 00:05:11.818 START TEST devices 00:05:11.818 ************************************ 00:05:11.818 11:01:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:12.077 * Looking for test storage... 00:05:12.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.077 11:01:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.077 11:01:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.077 11:01:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.077 11:01:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.077 11:01:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.077 11:01:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.077 11:01:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.077 11:01:23 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.077 11:01:23 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.077 11:01:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.077 11:01:23 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.077 11:01:23 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.077 11:01:23 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.077 11:01:23 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.077 11:01:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.077 11:01:23 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.078 11:01:23 -- scripts/common.sh@344 -- # : 1 00:05:12.078 11:01:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.078 11:01:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.078 11:01:23 -- scripts/common.sh@364 -- # decimal 1 00:05:12.078 11:01:23 -- scripts/common.sh@352 -- # local d=1 00:05:12.078 11:01:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.078 11:01:23 -- scripts/common.sh@354 -- # echo 1 00:05:12.078 11:01:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.078 11:01:23 -- scripts/common.sh@365 -- # decimal 2 00:05:12.078 11:01:23 -- scripts/common.sh@352 -- # local d=2 00:05:12.078 11:01:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.078 11:01:23 -- scripts/common.sh@354 -- # echo 2 00:05:12.078 11:01:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.078 11:01:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.078 11:01:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.078 11:01:23 -- scripts/common.sh@367 -- # return 0 00:05:12.078 11:01:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.078 11:01:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.078 --rc genhtml_branch_coverage=1 00:05:12.078 --rc genhtml_function_coverage=1 00:05:12.078 --rc genhtml_legend=1 00:05:12.078 --rc geninfo_all_blocks=1 00:05:12.078 --rc geninfo_unexecuted_blocks=1 00:05:12.078 00:05:12.078 ' 00:05:12.078 11:01:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.078 --rc genhtml_branch_coverage=1 00:05:12.078 --rc genhtml_function_coverage=1 00:05:12.078 --rc genhtml_legend=1 00:05:12.078 --rc geninfo_all_blocks=1 00:05:12.078 --rc geninfo_unexecuted_blocks=1 00:05:12.078 00:05:12.078 ' 00:05:12.078 11:01:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.078 --rc genhtml_branch_coverage=1 00:05:12.078 --rc genhtml_function_coverage=1 00:05:12.078 --rc genhtml_legend=1 00:05:12.078 --rc geninfo_all_blocks=1 00:05:12.078 --rc geninfo_unexecuted_blocks=1 00:05:12.078 00:05:12.078 ' 00:05:12.078 11:01:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.078 --rc genhtml_branch_coverage=1 00:05:12.078 --rc genhtml_function_coverage=1 00:05:12.078 --rc genhtml_legend=1 00:05:12.078 --rc geninfo_all_blocks=1 00:05:12.078 --rc geninfo_unexecuted_blocks=1 00:05:12.078 00:05:12.078 ' 00:05:12.078 11:01:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:12.078 11:01:23 -- setup/devices.sh@192 -- # setup reset 00:05:12.078 11:01:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.078 11:01:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.014 11:01:23 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:13.014 11:01:23 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:13.014 11:01:23 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:13.014 11:01:23 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:13.014 11:01:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.014 11:01:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:13.015 11:01:23 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:13.015 11:01:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.015 11:01:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:13.015 11:01:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:13.015 11:01:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.015 11:01:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:13.015 11:01:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:13.015 11:01:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:13.015 11:01:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:13.015 11:01:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:13.015 11:01:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.015 11:01:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:13.015 11:01:23 -- setup/devices.sh@196 -- # blocks=() 00:05:13.015 11:01:23 -- setup/devices.sh@196 -- # declare -a blocks 00:05:13.015 11:01:23 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:13.015 11:01:23 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:13.015 11:01:23 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:13.015 11:01:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.015 11:01:23 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:13.015 11:01:23 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:13.015 11:01:23 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:13.015 11:01:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:13.015 11:01:23 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:13.015 11:01:23 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:13.015 11:01:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:13.015 No valid GPT data, bailing 00:05:13.015 11:01:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:13.015 11:01:23 -- scripts/common.sh@393 -- # pt= 00:05:13.015 11:01:23 -- scripts/common.sh@394 -- # return 1 00:05:13.015 11:01:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:13.015 11:01:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:13.015 11:01:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:13.015 11:01:23 -- setup/common.sh@80 -- # echo 5368709120 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:13.015 11:01:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.015 11:01:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:13.015 11:01:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.015 11:01:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.015 11:01:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
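[editor's note] The is_block_zoned checks traced above read each namespace's queue/zoned sysfs attribute and treat anything other than "none" as a zoned device to be excluded from the test. A minimal sketch of that filter, assuming the same sysfs layout (the zoned_devs name mirrors the trace, but the loop body is illustrative):

    # Collect zoned block devices so later steps can skip them.
    declare -A zoned_devs=()
    for sysdir in /sys/block/nvme*; do
        dev=${sysdir##*/}
        # queue/zoned reports "none" for conventional (non-zoned) drives
        if [[ -e $sysdir/queue/zoned && $(<"$sysdir/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done

In this run all four namespaces report "none" (hence the repeated "[[ none != none ]]" lines), so zoned_devs stays empty and every device remains a candidate.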
00:05:13.015 11:01:24 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:13.015 11:01:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:13.015 No valid GPT data, bailing 00:05:13.015 11:01:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.015 11:01:24 -- scripts/common.sh@393 -- # pt= 00:05:13.015 11:01:24 -- scripts/common.sh@394 -- # return 1 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:13.015 11:01:24 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:13.015 11:01:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:13.015 11:01:24 -- setup/common.sh@80 -- # echo 4294967296 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:13.015 11:01:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.015 11:01:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.015 11:01:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.015 11:01:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.015 11:01:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:13.015 11:01:24 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:13.015 11:01:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:13.015 No valid GPT data, bailing 00:05:13.015 11:01:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:13.015 11:01:24 -- scripts/common.sh@393 -- # pt= 00:05:13.015 11:01:24 -- scripts/common.sh@394 -- # return 1 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:13.015 11:01:24 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:13.015 11:01:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:13.015 11:01:24 -- setup/common.sh@80 -- # echo 4294967296 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:13.015 11:01:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.015 11:01:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.015 11:01:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:13.015 11:01:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:13.015 11:01:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.015 11:01:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.015 11:01:24 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:13.015 11:01:24 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:13.015 11:01:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:13.274 No valid GPT data, bailing 00:05:13.274 11:01:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:13.274 11:01:24 -- scripts/common.sh@393 -- # pt= 00:05:13.274 11:01:24 -- scripts/common.sh@394 -- # return 1 00:05:13.274 11:01:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:13.274 11:01:24 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:13.274 11:01:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:13.274 11:01:24 -- setup/common.sh@80 -- # echo 4294967296 
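[editor's note] Each "No valid GPT data, bailing" above comes from spdk-gpt.py; block_in_use then confirms with blkid that PTTYPE is empty, meaning no partition table, so the namespace counts as free. sec_size_to_bytes next converts the sysfs sector count into bytes for the min_disk_size comparison. A condensed sketch of that selection step using only the blkid check (function name below is illustrative; the 3 GiB floor matches the 3221225472 in the trace):

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

    disk_is_candidate() {
        local dev=$1
        # an existing partition-table signature means the disk is in use
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || return 1
        # /sys/block/<dev>/size is given in 512-byte sectors
        local bytes=$(( $(<"/sys/block/$dev/size") * 512 ))
        (( bytes >= min_disk_size ))
    }

    disk_is_candidate nvme0n1 && echo "nvme0n1 can host the mount tests"

In the trace, nvme0n1 reports 5368709120 bytes and the nvme1 namespaces 4294967296 each, so all four pass and nvme0n1 is declared test_disk.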
00:05:13.274 11:01:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:13.274 11:01:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.274 11:01:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.274 11:01:24 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:13.274 11:01:24 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:13.274 11:01:24 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:13.274 11:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.274 11:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.274 11:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.274 ************************************ 00:05:13.274 START TEST nvme_mount 00:05:13.274 ************************************ 00:05:13.274 11:01:24 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:13.274 11:01:24 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:13.274 11:01:24 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:13.274 11:01:24 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.274 11:01:24 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.274 11:01:24 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:13.274 11:01:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:13.274 11:01:24 -- setup/common.sh@40 -- # local part_no=1 00:05:13.274 11:01:24 -- setup/common.sh@41 -- # local size=1073741824 00:05:13.274 11:01:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.274 11:01:24 -- setup/common.sh@44 -- # parts=() 00:05:13.274 11:01:24 -- setup/common.sh@44 -- # local parts 00:05:13.274 11:01:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.274 11:01:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.274 11:01:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.274 11:01:24 -- setup/common.sh@46 -- # (( part++ )) 00:05:13.274 11:01:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.274 11:01:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:13.274 11:01:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:13.274 11:01:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:14.208 Creating new GPT entries in memory. 00:05:14.208 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.208 other utilities. 00:05:14.208 11:01:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.208 11:01:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.208 11:01:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:14.208 11:01:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.208 11:01:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:15.143 Creating new GPT entries in memory. 00:05:15.143 The operation has completed successfully. 
00:05:15.401 11:01:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:15.401 11:01:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.401 11:01:26 -- setup/common.sh@62 -- # wait 64147 00:05:15.401 11:01:26 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.401 11:01:26 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:15.401 11:01:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.401 11:01:26 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:15.401 11:01:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:15.401 11:01:26 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.401 11:01:26 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.401 11:01:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:15.401 11:01:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:15.401 11:01:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.401 11:01:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.401 11:01:26 -- setup/devices.sh@53 -- # local found=0 00:05:15.401 11:01:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.401 11:01:26 -- setup/devices.sh@56 -- # : 00:05:15.401 11:01:26 -- setup/devices.sh@59 -- # local pci status 00:05:15.401 11:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.401 11:01:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:15.401 11:01:26 -- setup/devices.sh@47 -- # setup output config 00:05:15.401 11:01:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.401 11:01:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.659 11:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.659 11:01:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:15.659 11:01:26 -- setup/devices.sh@63 -- # found=1 00:05:15.659 11:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.659 11:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.659 11:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.918 11:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.918 11:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.918 11:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.918 11:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.918 11:01:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.918 11:01:27 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.918 11:01:27 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.918 11:01:27 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.918 11:01:27 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.918 11:01:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.918 11:01:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.918 11:01:27 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.265 11:01:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.265 11:01:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.265 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.265 11:01:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.265 11:01:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.265 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.265 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.265 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.265 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.265 11:01:27 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:16.265 11:01:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:16.265 11:01:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.265 11:01:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:16.265 11:01:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:16.265 11:01:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.265 11:01:27 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.265 11:01:27 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:16.265 11:01:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:16.265 11:01:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.265 11:01:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.265 11:01:27 -- setup/devices.sh@53 -- # local found=0 00:05:16.265 11:01:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.265 11:01:27 -- setup/devices.sh@56 -- # : 00:05:16.265 11:01:27 -- setup/devices.sh@59 -- # local pci status 00:05:16.265 11:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.265 11:01:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:16.265 11:01:27 -- setup/devices.sh@47 -- # setup output config 00:05:16.265 11:01:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.265 11:01:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.549 11:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.549 11:01:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:16.549 11:01:27 -- setup/devices.sh@63 -- # found=1 00:05:16.549 11:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.549 11:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.549 
11:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.812 11:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.812 11:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.071 11:01:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.071 11:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.071 11:01:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.071 11:01:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:17.071 11:01:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.071 11:01:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.071 11:01:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:17.071 11:01:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.071 11:01:28 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:17.071 11:01:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:17.071 11:01:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:17.071 11:01:28 -- setup/devices.sh@50 -- # local mount_point= 00:05:17.071 11:01:28 -- setup/devices.sh@51 -- # local test_file= 00:05:17.071 11:01:28 -- setup/devices.sh@53 -- # local found=0 00:05:17.071 11:01:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:17.071 11:01:28 -- setup/devices.sh@59 -- # local pci status 00:05:17.071 11:01:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.071 11:01:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.071 11:01:28 -- setup/devices.sh@47 -- # setup output config 00:05:17.071 11:01:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.071 11:01:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.330 11:01:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.330 11:01:28 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.330 11:01:28 -- setup/devices.sh@63 -- # found=1 00:05:17.330 11:01:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.330 11:01:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.330 11:01:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.588 11:01:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.588 11:01:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.588 11:01:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:17.588 11:01:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.847 11:01:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.847 11:01:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.847 11:01:28 -- setup/devices.sh@68 -- # return 0 00:05:17.847 11:01:28 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.847 11:01:28 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.847 11:01:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.847 11:01:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.847 11:01:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.847 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:17.847 ************************************ 00:05:17.847 END TEST nvme_mount 00:05:17.847 ************************************ 00:05:17.847 00:05:17.847 real 0m4.570s 00:05:17.847 user 0m1.043s 00:05:17.847 sys 0m1.198s 00:05:17.847 11:01:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.847 11:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 11:01:28 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.847 11:01:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.847 11:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.847 11:01:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 ************************************ 00:05:17.847 START TEST dm_mount 00:05:17.847 ************************************ 00:05:17.847 11:01:28 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:17.847 11:01:28 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.847 11:01:28 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.847 11:01:28 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.847 11:01:28 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.847 11:01:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.847 11:01:28 -- setup/common.sh@40 -- # local part_no=2 00:05:17.847 11:01:28 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.847 11:01:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.847 11:01:28 -- setup/common.sh@44 -- # parts=() 00:05:17.847 11:01:28 -- setup/common.sh@44 -- # local parts 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.847 11:01:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.847 11:01:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.847 11:01:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.847 11:01:28 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.847 11:01:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.847 11:01:28 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.783 Creating new GPT entries in memory. 00:05:18.783 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.783 other utilities. 00:05:18.783 11:01:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.783 11:01:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.783 11:01:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.783 11:01:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.783 11:01:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:20.161 Creating new GPT entries in memory. 00:05:20.161 The operation has completed successfully. 00:05:20.161 11:01:30 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.161 11:01:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.161 11:01:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:20.161 11:01:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.161 11:01:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:21.097 The operation has completed successfully. 00:05:21.097 11:01:31 -- setup/common.sh@57 -- # (( part++ )) 00:05:21.097 11:01:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.097 11:01:31 -- setup/common.sh@62 -- # wait 64608 00:05:21.097 11:01:31 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:21.097 11:01:31 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.097 11:01:31 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.097 11:01:31 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:21.097 11:01:31 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:21.097 11:01:31 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.097 11:01:32 -- setup/devices.sh@161 -- # break 00:05:21.097 11:01:32 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.097 11:01:32 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:21.097 11:01:32 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:21.098 11:01:32 -- setup/devices.sh@166 -- # dm=dm-0 00:05:21.098 11:01:32 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:21.098 11:01:32 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:21.098 11:01:32 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.098 11:01:32 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:21.098 11:01:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.098 11:01:32 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.098 11:01:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:21.098 11:01:32 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.098 11:01:32 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.098 11:01:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.098 11:01:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:21.098 11:01:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.098 11:01:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.098 11:01:32 -- setup/devices.sh@53 -- # local found=0 00:05:21.098 11:01:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.098 11:01:32 -- setup/devices.sh@56 -- # : 00:05:21.098 11:01:32 -- setup/devices.sh@59 -- # local pci status 00:05:21.098 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.098 11:01:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.098 11:01:32 -- setup/devices.sh@47 -- # setup output config 00:05:21.098 11:01:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.098 11:01:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.098 11:01:32 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.098 11:01:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.098 11:01:32 -- setup/devices.sh@63 -- # found=1 00:05:21.098 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.098 11:01:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.098 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.666 11:01:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.666 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.666 11:01:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.666 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.666 11:01:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.666 11:01:32 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.666 11:01:32 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.666 11:01:32 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.666 11:01:32 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.666 11:01:32 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.666 11:01:32 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:21.666 11:01:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.666 11:01:32 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:21.666 11:01:32 -- setup/devices.sh@50 -- # local mount_point= 00:05:21.666 11:01:32 -- setup/devices.sh@51 -- # local test_file= 00:05:21.666 11:01:32 -- setup/devices.sh@53 -- # local found=0 00:05:21.666 11:01:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.666 11:01:32 -- setup/devices.sh@59 -- # local pci status 00:05:21.666 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.666 11:01:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.666 11:01:32 -- setup/devices.sh@47 -- # setup output config 00:05:21.666 11:01:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.666 11:01:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.925 11:01:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.925 11:01:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.925 11:01:32 -- setup/devices.sh@63 -- # found=1 00:05:21.925 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.925 11:01:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.925 11:01:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.183 11:01:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.183 11:01:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.442 11:01:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.442 11:01:33 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.442 11:01:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.442 11:01:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.442 11:01:33 -- setup/devices.sh@68 -- # return 0 00:05:22.442 11:01:33 -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.442 11:01:33 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.442 11:01:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.442 11:01:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.442 11:01:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.442 11:01:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.442 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.442 11:01:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.442 11:01:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.442 00:05:22.442 real 0m4.628s 00:05:22.442 user 0m0.719s 00:05:22.442 sys 0m0.830s 00:05:22.442 11:01:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.442 11:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.442 ************************************ 00:05:22.442 END TEST dm_mount 00:05:22.442 ************************************ 00:05:22.442 11:01:33 -- setup/devices.sh@1 -- # cleanup 00:05:22.442 11:01:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.442 11:01:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.442 11:01:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.442 11:01:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.442 11:01:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.442 11:01:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.701 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.701 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.701 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.701 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.701 11:01:33 -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.701 11:01:33 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.701 11:01:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.701 11:01:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.701 11:01:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.701 11:01:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.701 11:01:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.701 00:05:22.701 real 0m10.846s 00:05:22.701 user 0m2.527s 00:05:22.701 sys 0m2.622s 00:05:22.701 11:01:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.701 ************************************ 00:05:22.701 END TEST devices 00:05:22.701 ************************************ 00:05:22.701 11:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.701 00:05:22.701 real 0m23.242s 00:05:22.701 user 0m8.064s 00:05:22.701 sys 0m9.373s 00:05:22.701 11:01:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.701 11:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.701 ************************************ 00:05:22.701 END TEST setup.sh 00:05:22.701 ************************************ 00:05:22.959 11:01:33 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.959 Hugepages 00:05:22.959 node hugesize free / total 00:05:22.959 node0 1048576kB 0 / 0 00:05:22.959 node0 2048kB 2048 / 2048 00:05:22.959 00:05:22.959 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.959 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.218 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.218 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:23.218 11:01:34 -- spdk/autotest.sh@128 -- # uname -s 00:05:23.218 11:01:34 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:23.218 11:01:34 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:23.218 11:01:34 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.046 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.046 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.046 11:01:35 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:24.978 11:01:36 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:24.978 11:01:36 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:24.978 11:01:36 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.978 11:01:36 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:24.978 11:01:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:24.978 11:01:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:24.978 11:01:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.978 11:01:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.978 11:01:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:25.235 11:01:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:25.235 11:01:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:25.235 11:01:36 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.492 Waiting for block devices as requested 00:05:25.492 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.492 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.751 11:01:36 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:25.751 11:01:36 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:25.751 11:01:36 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:25.751 11:01:36 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:25.751 11:01:36 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:25.751 11:01:36 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1552 -- # continue 00:05:25.751 11:01:36 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:25.751 11:01:36 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.751 11:01:36 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:25.751 11:01:36 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:25.751 11:01:36 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:25.751 11:01:36 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:25.751 11:01:36 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:25.751 11:01:36 -- common/autotest_common.sh@1552 -- # continue 00:05:25.751 11:01:36 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:25.751 11:01:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.751 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.751 11:01:36 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:25.751 11:01:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.751 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.751 11:01:36 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.575 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.575 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:26.575 11:01:37 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:26.575 11:01:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.575 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.575 11:01:37 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:26.575 11:01:37 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:26.575 11:01:37 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.575 11:01:37 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:26.575 11:01:37 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:26.575 11:01:37 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:26.575 11:01:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:26.575 11:01:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:26.576 11:01:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.576 11:01:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.576 11:01:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:26.834 11:01:37 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:26.834 11:01:37 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:26.834 11:01:37 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:26.834 11:01:37 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:26.834 11:01:37 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:26.834 11:01:37 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.834 11:01:37 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:26.834 11:01:37 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:26.834 11:01:37 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:26.834 11:01:37 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.834 11:01:37 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:26.834 11:01:37 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:26.834 11:01:37 -- common/autotest_common.sh@1588 -- # return 0 00:05:26.834 11:01:37 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:26.834 11:01:37 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:26.834 11:01:37 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:26.834 11:01:37 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:26.834 11:01:37 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:26.834 11:01:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.834 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.834 11:01:37 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.834 11:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.834 11:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.834 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.834 ************************************ 00:05:26.834 START TEST env 00:05:26.834 ************************************ 00:05:26.834 11:01:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.834 * Looking for test storage... 
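The controller checks above are built from two pieces that recur throughout this log: gen_nvme.sh piped through jq to list the NVMe BDFs, and nvme-cli id-ctrl output filtered with grep/cut. A minimal stand-alone sketch of that loop follows; the sysfs-to-/dev mapping is simplified relative to what get_nvme_ctrlr_from_bdf actually does, so treat it as an approximation:

  bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
  for bdf in $bdfs; do
    # map the PCI address back to its /dev/nvmeX node, as the trace does via /sys/class/nvme
    ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' in this run
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' here, so the test simply continues
    echo "$bdf -> $ctrlr oacs=$oacs unvmcap=$unvmcap"
  done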
00:05:26.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:26.834 11:01:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.834 11:01:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.834 11:01:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.834 11:01:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.834 11:01:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.834 11:01:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.834 11:01:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.834 11:01:37 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.834 11:01:37 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.834 11:01:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.834 11:01:37 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.834 11:01:37 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.834 11:01:37 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.834 11:01:37 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.834 11:01:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.834 11:01:37 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.834 11:01:37 -- scripts/common.sh@344 -- # : 1 00:05:26.834 11:01:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.834 11:01:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.834 11:01:37 -- scripts/common.sh@364 -- # decimal 1 00:05:26.834 11:01:37 -- scripts/common.sh@352 -- # local d=1 00:05:26.834 11:01:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.834 11:01:37 -- scripts/common.sh@354 -- # echo 1 00:05:26.834 11:01:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.834 11:01:37 -- scripts/common.sh@365 -- # decimal 2 00:05:26.834 11:01:37 -- scripts/common.sh@352 -- # local d=2 00:05:26.834 11:01:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.834 11:01:37 -- scripts/common.sh@354 -- # echo 2 00:05:26.834 11:01:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.834 11:01:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.834 11:01:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.834 11:01:37 -- scripts/common.sh@367 -- # return 0 00:05:26.834 11:01:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.834 11:01:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.834 --rc genhtml_branch_coverage=1 00:05:26.834 --rc genhtml_function_coverage=1 00:05:26.834 --rc genhtml_legend=1 00:05:26.834 --rc geninfo_all_blocks=1 00:05:26.834 --rc geninfo_unexecuted_blocks=1 00:05:26.834 00:05:26.835 ' 00:05:26.835 11:01:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.835 --rc genhtml_branch_coverage=1 00:05:26.835 --rc genhtml_function_coverage=1 00:05:26.835 --rc genhtml_legend=1 00:05:26.835 --rc geninfo_all_blocks=1 00:05:26.835 --rc geninfo_unexecuted_blocks=1 00:05:26.835 00:05:26.835 ' 00:05:26.835 11:01:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.835 --rc genhtml_branch_coverage=1 00:05:26.835 --rc genhtml_function_coverage=1 00:05:26.835 --rc genhtml_legend=1 00:05:26.835 --rc geninfo_all_blocks=1 00:05:26.835 --rc geninfo_unexecuted_blocks=1 00:05:26.835 00:05:26.835 ' 00:05:26.835 11:01:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.835 --rc genhtml_branch_coverage=1 00:05:26.835 --rc genhtml_function_coverage=1 00:05:26.835 --rc genhtml_legend=1 00:05:26.835 --rc geninfo_all_blocks=1 00:05:26.835 --rc geninfo_unexecuted_blocks=1 00:05:26.835 00:05:26.835 ' 00:05:26.835 11:01:37 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.835 11:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.835 11:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.835 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:27.094 ************************************ 00:05:27.094 START TEST env_memory 00:05:27.094 ************************************ 00:05:27.094 11:01:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.094 00:05:27.094 00:05:27.094 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.094 http://cunit.sourceforge.net/ 00:05:27.094 00:05:27.094 00:05:27.094 Suite: memory 00:05:27.094 Test: alloc and free memory map ...[2024-12-06 11:01:38.037290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.094 passed 00:05:27.094 Test: mem map translation ...[2024-12-06 11:01:38.068362] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.094 [2024-12-06 11:01:38.068599] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.094 [2024-12-06 11:01:38.068782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.094 [2024-12-06 11:01:38.068798] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.094 passed 00:05:27.094 Test: mem map registration ...[2024-12-06 11:01:38.135438] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:27.094 [2024-12-06 11:01:38.136018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:27.094 passed 00:05:27.094 Test: mem map adjacent registrations ...passed 00:05:27.094 00:05:27.094 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.094 suites 1 1 n/a 0 0 00:05:27.094 tests 4 4 4 0 0 00:05:27.094 asserts 152 152 152 0 n/a 00:05:27.094 00:05:27.094 Elapsed time = 0.215 seconds 00:05:27.094 00:05:27.094 real 0m0.235s 00:05:27.094 user 0m0.212s 00:05:27.094 sys 0m0.015s 00:05:27.094 11:01:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.094 ************************************ 00:05:27.094 END TEST env_memory 00:05:27.094 ************************************ 00:05:27.094 11:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.354 11:01:38 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.354 11:01:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.354 11:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.354 11:01:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.354 ************************************ 00:05:27.354 START TEST env_vtophys 00:05:27.354 ************************************ 00:05:27.354 11:01:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.354 EAL: lib.eal log level changed from notice to debug 00:05:27.354 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.354 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.354 EAL: Maximum logical cores by configuration: 128 00:05:27.354 EAL: Detected CPU lcores: 10 00:05:27.354 EAL: Detected NUMA nodes: 1 00:05:27.354 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:27.354 EAL: Detected shared linkage of DPDK 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:27.354 EAL: Registered [vdev] bus. 00:05:27.354 EAL: bus.vdev log level changed from disabled to notice 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:27.354 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.354 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:27.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:27.354 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: Selected IOVA mode 'PA' 00:05:27.354 EAL: Probing VFIO support... 00:05:27.354 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.354 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.354 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.354 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.354 EAL: Setting up physically contiguous memory... 
00:05:27.354 EAL: Setting maximum number of open files to 524288 00:05:27.354 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.354 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.354 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.354 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.354 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.354 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.354 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.354 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.354 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.354 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.354 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.354 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.354 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.354 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.354 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.354 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.354 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.354 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.354 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.354 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.354 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.354 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.354 EAL: Hugepages will be freed exactly as allocated. 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: TSC frequency is ~2200000 KHz 00:05:27.354 EAL: Main lcore 0 is ready (tid=7f40f0248a00;cpuset=[0]) 00:05:27.354 EAL: Trying to obtain current memory policy. 00:05:27.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.354 EAL: Restoring previous memory policy: 0 00:05:27.354 EAL: request: mp_malloc_sync 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.354 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.354 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.354 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.354 00:05:27.354 00:05:27.354 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.354 http://cunit.sourceforge.net/ 00:05:27.354 00:05:27.354 00:05:27.354 Suite: components_suite 00:05:27.354 Test: vtophys_malloc_test ...passed 00:05:27.354 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
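The 0x400000000-byte virtual areas requested above follow directly from the segment-list parameters EAL prints there: 4 lists of 8192 segments with a 2 MiB hugepage size, i.e. 16 GiB of address space reserved per list (plus a 0x61000-byte header). A quick check of that arithmetic:

  # 8192 segments x 2 MiB per segment = 16 GiB of VA per memseg list
  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000, the size EAL asks for above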
00:05:27.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.354 EAL: Restoring previous memory policy: 4 00:05:27.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.354 EAL: request: mp_malloc_sync 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.354 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.354 EAL: request: mp_malloc_sync 00:05:27.354 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.355 EAL: Trying to obtain current memory policy. 00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.355 EAL: Restoring previous memory policy: 4 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.355 EAL: Trying to obtain current memory policy. 00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.355 EAL: Restoring previous memory policy: 4 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.355 EAL: Trying to obtain current memory policy. 00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.355 EAL: Restoring previous memory policy: 4 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.355 EAL: Trying to obtain current memory policy. 00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.355 EAL: Restoring previous memory policy: 4 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.355 EAL: Trying to obtain current memory policy. 
00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.355 EAL: Restoring previous memory policy: 4 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.355 EAL: request: mp_malloc_sync 00:05:27.355 EAL: No shared files mode enabled, IPC is disabled 00:05:27.355 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.355 EAL: Trying to obtain current memory policy. 00:05:27.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.614 EAL: Restoring previous memory policy: 4 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.614 EAL: request: mp_malloc_sync 00:05:27.614 EAL: No shared files mode enabled, IPC is disabled 00:05:27.614 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.614 EAL: request: mp_malloc_sync 00:05:27.614 EAL: No shared files mode enabled, IPC is disabled 00:05:27.614 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.614 EAL: Trying to obtain current memory policy. 00:05:27.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.614 EAL: Restoring previous memory policy: 4 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.614 EAL: request: mp_malloc_sync 00:05:27.614 EAL: No shared files mode enabled, IPC is disabled 00:05:27.614 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.614 EAL: request: mp_malloc_sync 00:05:27.614 EAL: No shared files mode enabled, IPC is disabled 00:05:27.614 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.614 EAL: Trying to obtain current memory policy. 00:05:27.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.614 EAL: Restoring previous memory policy: 4 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.614 EAL: request: mp_malloc_sync 00:05:27.614 EAL: No shared files mode enabled, IPC is disabled 00:05:27.614 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.614 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.873 EAL: request: mp_malloc_sync 00:05:27.873 EAL: No shared files mode enabled, IPC is disabled 00:05:27.873 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.873 EAL: Trying to obtain current memory policy. 
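The expand/shrink rounds above and below step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, a 2^n + 2 MB ladder: each round appears to allocate one buffer of that size (growing the EAL heap) and then free it (shrinking the heap again). The size sequence itself is easy to reproduce:

  # the allocation sizes the vtophys malloc test walks through, in MB
  for n in $(seq 1 10); do echo $(( (1 << n) + 2 )); done   # 4 6 10 18 34 66 130 258 514 1026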
00:05:27.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.873 EAL: Restoring previous memory policy: 4 00:05:27.873 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.873 EAL: request: mp_malloc_sync 00:05:27.873 EAL: No shared files mode enabled, IPC is disabled 00:05:27.873 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.132 passed 00:05:28.132 00:05:28.132 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.132 suites 1 1 n/a 0 0 00:05:28.132 tests 2 2 2 0 0 00:05:28.132 asserts 5218 5218 5218 0 n/a 00:05:28.132 00:05:28.132 Elapsed time = 0.687 seconds 00:05:28.132 EAL: request: mp_malloc_sync 00:05:28.132 EAL: No shared files mode enabled, IPC is disabled 00:05:28.132 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.132 EAL: request: mp_malloc_sync 00:05:28.132 EAL: No shared files mode enabled, IPC is disabled 00:05:28.132 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.132 EAL: No shared files mode enabled, IPC is disabled 00:05:28.132 EAL: No shared files mode enabled, IPC is disabled 00:05:28.132 EAL: No shared files mode enabled, IPC is disabled 00:05:28.132 ************************************ 00:05:28.132 END TEST env_vtophys 00:05:28.132 ************************************ 00:05:28.132 00:05:28.132 real 0m0.885s 00:05:28.132 user 0m0.453s 00:05:28.132 sys 0m0.298s 00:05:28.132 11:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.132 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.132 11:01:39 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.132 11:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.132 11:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.132 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.132 ************************************ 00:05:28.132 START TEST env_pci 00:05:28.132 ************************************ 00:05:28.132 11:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.132 00:05:28.132 00:05:28.132 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.132 http://cunit.sourceforge.net/ 00:05:28.132 00:05:28.132 00:05:28.132 Suite: pci 00:05:28.132 Test: pci_hook ...[2024-12-06 11:01:39.231192] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65740 has claimed it 00:05:28.132 passed 00:05:28.132 00:05:28.132 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.132 suites 1 1 n/a 0 0 00:05:28.132 tests 1 1 1 0 0 00:05:28.132 asserts 25 25 25 0 n/a 00:05:28.132 00:05:28.132 Elapsed time = 0.002 seconds 00:05:28.132 EAL: Cannot find device (10000:00:01.0) 00:05:28.132 EAL: Failed to attach device on primary process 00:05:28.132 00:05:28.132 real 0m0.021s 00:05:28.132 user 0m0.007s 00:05:28.132 sys 0m0.013s 00:05:28.132 ************************************ 00:05:28.132 END TEST env_pci 00:05:28.132 ************************************ 00:05:28.132 11:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.132 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.392 11:01:39 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.392 11:01:39 -- env/env.sh@15 -- # uname 00:05:28.392 11:01:39 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.392 11:01:39 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:28.392 11:01:39 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.392 11:01:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:28.392 11:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.392 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.392 ************************************ 00:05:28.392 START TEST env_dpdk_post_init 00:05:28.392 ************************************ 00:05:28.392 11:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.392 EAL: Detected CPU lcores: 10 00:05:28.392 EAL: Detected NUMA nodes: 1 00:05:28.392 EAL: Detected shared linkage of DPDK 00:05:28.392 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.392 EAL: Selected IOVA mode 'PA' 00:05:28.392 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.392 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:28.392 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:28.392 Starting DPDK initialization... 00:05:28.392 Starting SPDK post initialization... 00:05:28.392 SPDK NVMe probe 00:05:28.392 Attaching to 0000:00:06.0 00:05:28.392 Attaching to 0000:00:07.0 00:05:28.392 Attached to 0000:00:06.0 00:05:28.392 Attached to 0000:00:07.0 00:05:28.392 Cleaning up... 00:05:28.392 ************************************ 00:05:28.392 END TEST env_dpdk_post_init 00:05:28.392 ************************************ 00:05:28.392 00:05:28.392 real 0m0.175s 00:05:28.392 user 0m0.044s 00:05:28.392 sys 0m0.031s 00:05:28.392 11:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.392 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.392 11:01:39 -- env/env.sh@26 -- # uname 00:05:28.392 11:01:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.392 11:01:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.392 11:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.392 11:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.392 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.392 ************************************ 00:05:28.392 START TEST env_mem_callbacks 00:05:28.392 ************************************ 00:05:28.392 11:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.651 EAL: Detected CPU lcores: 10 00:05:28.651 EAL: Detected NUMA nodes: 1 00:05:28.651 EAL: Detected shared linkage of DPDK 00:05:28.651 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.651 EAL: Selected IOVA mode 'PA' 00:05:28.651 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.651 00:05:28.651 00:05:28.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.651 http://cunit.sourceforge.net/ 00:05:28.651 00:05:28.651 00:05:28.651 Suite: memory 00:05:28.651 Test: test ... 
00:05:28.651 register 0x200000200000 2097152 00:05:28.651 malloc 3145728 00:05:28.651 register 0x200000400000 4194304 00:05:28.651 buf 0x200000500000 len 3145728 PASSED 00:05:28.651 malloc 64 00:05:28.651 buf 0x2000004fff40 len 64 PASSED 00:05:28.651 malloc 4194304 00:05:28.651 register 0x200000800000 6291456 00:05:28.651 buf 0x200000a00000 len 4194304 PASSED 00:05:28.651 free 0x200000500000 3145728 00:05:28.651 free 0x2000004fff40 64 00:05:28.651 unregister 0x200000400000 4194304 PASSED 00:05:28.651 free 0x200000a00000 4194304 00:05:28.651 unregister 0x200000800000 6291456 PASSED 00:05:28.651 malloc 8388608 00:05:28.651 register 0x200000400000 10485760 00:05:28.651 buf 0x200000600000 len 8388608 PASSED 00:05:28.651 free 0x200000600000 8388608 00:05:28.651 unregister 0x200000400000 10485760 PASSED 00:05:28.651 passed 00:05:28.651 00:05:28.651 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.651 suites 1 1 n/a 0 0 00:05:28.651 tests 1 1 1 0 0 00:05:28.651 asserts 15 15 15 0 n/a 00:05:28.651 00:05:28.651 Elapsed time = 0.008 seconds 00:05:28.651 ************************************ 00:05:28.651 END TEST env_mem_callbacks 00:05:28.651 ************************************ 00:05:28.651 00:05:28.651 real 0m0.134s 00:05:28.651 user 0m0.014s 00:05:28.651 sys 0m0.017s 00:05:28.651 11:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.651 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.651 ************************************ 00:05:28.651 END TEST env 00:05:28.651 ************************************ 00:05:28.651 00:05:28.651 real 0m1.917s 00:05:28.651 user 0m0.909s 00:05:28.651 sys 0m0.628s 00:05:28.651 11:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.651 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.651 11:01:39 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.651 11:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.651 11:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.651 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.651 ************************************ 00:05:28.651 START TEST rpc 00:05:28.651 ************************************ 00:05:28.651 11:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.910 * Looking for test storage... 
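Each of the env sub-tests above is a stand-alone binary under test/env/, at the paths printed in the trace, so an individual failure should be reproducible outside autotest by running the binary directly (env_dpdk_post_init with the same coremask and base-virtaddr arguments the log shows):

  cd /home/vagrant/spdk_repo/spdk
  test/env/memory/memory_ut
  test/env/vtophys/vtophys
  test/env/pci/pci_ut
  test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  test/env/mem_callbacks/mem_callbacks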
00:05:28.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.910 11:01:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.910 11:01:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.910 11:01:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.910 11:01:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.910 11:01:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.910 11:01:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.910 11:01:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.910 11:01:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.910 11:01:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.910 11:01:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.910 11:01:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.910 11:01:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.910 11:01:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.910 11:01:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.910 11:01:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.910 11:01:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.910 11:01:39 -- scripts/common.sh@344 -- # : 1 00:05:28.910 11:01:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.910 11:01:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.910 11:01:39 -- scripts/common.sh@364 -- # decimal 1 00:05:28.910 11:01:39 -- scripts/common.sh@352 -- # local d=1 00:05:28.910 11:01:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.910 11:01:39 -- scripts/common.sh@354 -- # echo 1 00:05:28.910 11:01:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.910 11:01:39 -- scripts/common.sh@365 -- # decimal 2 00:05:28.910 11:01:39 -- scripts/common.sh@352 -- # local d=2 00:05:28.910 11:01:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.910 11:01:39 -- scripts/common.sh@354 -- # echo 2 00:05:28.910 11:01:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.910 11:01:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.910 11:01:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.910 11:01:39 -- scripts/common.sh@367 -- # return 0 00:05:28.910 11:01:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.910 11:01:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.910 --rc genhtml_branch_coverage=1 00:05:28.910 --rc genhtml_function_coverage=1 00:05:28.910 --rc genhtml_legend=1 00:05:28.910 --rc geninfo_all_blocks=1 00:05:28.910 --rc geninfo_unexecuted_blocks=1 00:05:28.910 00:05:28.910 ' 00:05:28.910 11:01:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.910 --rc genhtml_branch_coverage=1 00:05:28.910 --rc genhtml_function_coverage=1 00:05:28.910 --rc genhtml_legend=1 00:05:28.910 --rc geninfo_all_blocks=1 00:05:28.910 --rc geninfo_unexecuted_blocks=1 00:05:28.911 00:05:28.911 ' 00:05:28.911 11:01:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.911 --rc genhtml_branch_coverage=1 00:05:28.911 --rc genhtml_function_coverage=1 00:05:28.911 --rc genhtml_legend=1 00:05:28.911 --rc geninfo_all_blocks=1 00:05:28.911 --rc geninfo_unexecuted_blocks=1 00:05:28.911 00:05:28.911 ' 00:05:28.911 11:01:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.911 --rc genhtml_branch_coverage=1 00:05:28.911 --rc genhtml_function_coverage=1 00:05:28.911 --rc genhtml_legend=1 00:05:28.911 --rc geninfo_all_blocks=1 00:05:28.911 --rc geninfo_unexecuted_blocks=1 00:05:28.911 00:05:28.911 ' 00:05:28.911 11:01:39 -- rpc/rpc.sh@65 -- # spdk_pid=65862 00:05:28.911 11:01:39 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:28.911 11:01:39 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.911 11:01:39 -- rpc/rpc.sh@67 -- # waitforlisten 65862 00:05:28.911 11:01:39 -- common/autotest_common.sh@829 -- # '[' -z 65862 ']' 00:05:28.911 11:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.911 11:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.911 11:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.911 11:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.911 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.911 [2024-12-06 11:01:39.999189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.911 [2024-12-06 11:01:39.999289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65862 ] 00:05:29.169 [2024-12-06 11:01:40.141476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.169 [2024-12-06 11:01:40.182527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.169 [2024-12-06 11:01:40.182721] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.169 [2024-12-06 11:01:40.182745] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65862' to capture a snapshot of events at runtime. 00:05:29.169 [2024-12-06 11:01:40.182756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65862 for offline analysis/debug. 
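The rpc_integrity run that follows exercises a small bdev lifecycle against the spdk_tgt instance started above (listening on /var/tmp/spdk.sock). The same sequence can be driven by hand with scripts/rpc.py; this is a sketch of the calls the trace issues through rpc_cmd, with Malloc0 and Passthru0 being the names the target happens to return in this run:

  cd /home/vagrant/spdk_repo/spdk
  build/bin/spdk_tgt -e bdev &                        # same 'bdev' tracepoint group as in the log
  # wait for /var/tmp/spdk.sock to appear (the test uses waitforlisten for this)
  scripts/rpc.py bdev_get_bdevs                       # initially '[]'
  scripts/rpc.py bdev_malloc_create 8 512             # 8 MB, 512-byte blocks -> Malloc0 (16384 blocks)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0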
00:05:29.169 [2024-12-06 11:01:40.182793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.105 11:01:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.105 11:01:40 -- common/autotest_common.sh@862 -- # return 0 00:05:30.105 11:01:40 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.105 11:01:40 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.105 11:01:40 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.105 11:01:40 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.105 11:01:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.105 11:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.105 11:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 ************************************ 00:05:30.105 START TEST rpc_integrity 00:05:30.105 ************************************ 00:05:30.105 11:01:41 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:30.105 11:01:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.105 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.105 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.105 11:01:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.105 11:01:41 -- rpc/rpc.sh@13 -- # jq length 00:05:30.105 11:01:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.105 11:01:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.105 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.105 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.105 11:01:41 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.105 11:01:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.105 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.105 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.105 11:01:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.105 { 00:05:30.105 "name": "Malloc0", 00:05:30.105 "aliases": [ 00:05:30.105 "dbc7bd16-93ce-4a19-9753-cf2121a3717e" 00:05:30.105 ], 00:05:30.105 "product_name": "Malloc disk", 00:05:30.105 "block_size": 512, 00:05:30.105 "num_blocks": 16384, 00:05:30.105 "uuid": "dbc7bd16-93ce-4a19-9753-cf2121a3717e", 00:05:30.105 "assigned_rate_limits": { 00:05:30.105 "rw_ios_per_sec": 0, 00:05:30.105 "rw_mbytes_per_sec": 0, 00:05:30.105 "r_mbytes_per_sec": 0, 00:05:30.105 "w_mbytes_per_sec": 0 00:05:30.105 }, 00:05:30.105 "claimed": false, 00:05:30.105 "zoned": false, 00:05:30.105 "supported_io_types": { 00:05:30.105 "read": true, 00:05:30.105 "write": true, 00:05:30.105 "unmap": true, 00:05:30.105 "write_zeroes": true, 00:05:30.105 "flush": true, 00:05:30.105 "reset": true, 00:05:30.105 "compare": false, 00:05:30.105 "compare_and_write": false, 00:05:30.105 "abort": true, 00:05:30.105 "nvme_admin": false, 00:05:30.105 "nvme_io": false 00:05:30.105 }, 00:05:30.105 "memory_domains": [ 00:05:30.105 { 00:05:30.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.105 
"dma_device_type": 2 00:05:30.105 } 00:05:30.105 ], 00:05:30.105 "driver_specific": {} 00:05:30.105 } 00:05:30.105 ]' 00:05:30.105 11:01:41 -- rpc/rpc.sh@17 -- # jq length 00:05:30.105 11:01:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.105 11:01:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.105 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.105 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 [2024-12-06 11:01:41.155323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.105 [2024-12-06 11:01:41.155382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.105 [2024-12-06 11:01:41.155408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x228c030 00:05:30.105 [2024-12-06 11:01:41.155415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.105 [2024-12-06 11:01:41.156886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.105 [2024-12-06 11:01:41.156951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.105 Passthru0 00:05:30.105 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.105 11:01:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.105 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.105 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.105 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.105 11:01:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.105 { 00:05:30.105 "name": "Malloc0", 00:05:30.105 "aliases": [ 00:05:30.105 "dbc7bd16-93ce-4a19-9753-cf2121a3717e" 00:05:30.105 ], 00:05:30.105 "product_name": "Malloc disk", 00:05:30.105 "block_size": 512, 00:05:30.105 "num_blocks": 16384, 00:05:30.105 "uuid": "dbc7bd16-93ce-4a19-9753-cf2121a3717e", 00:05:30.105 "assigned_rate_limits": { 00:05:30.105 "rw_ios_per_sec": 0, 00:05:30.105 "rw_mbytes_per_sec": 0, 00:05:30.105 "r_mbytes_per_sec": 0, 00:05:30.105 "w_mbytes_per_sec": 0 00:05:30.105 }, 00:05:30.105 "claimed": true, 00:05:30.105 "claim_type": "exclusive_write", 00:05:30.105 "zoned": false, 00:05:30.105 "supported_io_types": { 00:05:30.105 "read": true, 00:05:30.105 "write": true, 00:05:30.105 "unmap": true, 00:05:30.105 "write_zeroes": true, 00:05:30.105 "flush": true, 00:05:30.105 "reset": true, 00:05:30.105 "compare": false, 00:05:30.105 "compare_and_write": false, 00:05:30.105 "abort": true, 00:05:30.105 "nvme_admin": false, 00:05:30.105 "nvme_io": false 00:05:30.105 }, 00:05:30.105 "memory_domains": [ 00:05:30.105 { 00:05:30.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.105 "dma_device_type": 2 00:05:30.105 } 00:05:30.105 ], 00:05:30.105 "driver_specific": {} 00:05:30.105 }, 00:05:30.105 { 00:05:30.105 "name": "Passthru0", 00:05:30.105 "aliases": [ 00:05:30.105 "cb87e9b8-ecd2-5bd8-84b9-5ae55ecaa468" 00:05:30.105 ], 00:05:30.105 "product_name": "passthru", 00:05:30.105 "block_size": 512, 00:05:30.105 "num_blocks": 16384, 00:05:30.105 "uuid": "cb87e9b8-ecd2-5bd8-84b9-5ae55ecaa468", 00:05:30.105 "assigned_rate_limits": { 00:05:30.105 "rw_ios_per_sec": 0, 00:05:30.105 "rw_mbytes_per_sec": 0, 00:05:30.105 "r_mbytes_per_sec": 0, 00:05:30.105 "w_mbytes_per_sec": 0 00:05:30.105 }, 00:05:30.105 "claimed": false, 00:05:30.105 "zoned": false, 00:05:30.105 "supported_io_types": { 00:05:30.105 "read": true, 00:05:30.105 "write": true, 00:05:30.105 "unmap": true, 00:05:30.105 
"write_zeroes": true, 00:05:30.105 "flush": true, 00:05:30.105 "reset": true, 00:05:30.105 "compare": false, 00:05:30.105 "compare_and_write": false, 00:05:30.105 "abort": true, 00:05:30.105 "nvme_admin": false, 00:05:30.105 "nvme_io": false 00:05:30.105 }, 00:05:30.105 "memory_domains": [ 00:05:30.105 { 00:05:30.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.106 "dma_device_type": 2 00:05:30.106 } 00:05:30.106 ], 00:05:30.106 "driver_specific": { 00:05:30.106 "passthru": { 00:05:30.106 "name": "Passthru0", 00:05:30.106 "base_bdev_name": "Malloc0" 00:05:30.106 } 00:05:30.106 } 00:05:30.106 } 00:05:30.106 ]' 00:05:30.106 11:01:41 -- rpc/rpc.sh@21 -- # jq length 00:05:30.106 11:01:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.106 11:01:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.106 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.106 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.365 11:01:41 -- rpc/rpc.sh@26 -- # jq length 00:05:30.365 ************************************ 00:05:30.365 END TEST rpc_integrity 00:05:30.365 ************************************ 00:05:30.365 11:01:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.365 00:05:30.365 real 0m0.326s 00:05:30.365 user 0m0.223s 00:05:30.365 sys 0m0.036s 00:05:30.365 11:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.365 11:01:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.365 11:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 ************************************ 00:05:30.365 START TEST rpc_plugins 00:05:30.365 ************************************ 00:05:30.365 11:01:41 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:30.365 11:01:41 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.365 11:01:41 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.365 { 00:05:30.365 "name": "Malloc1", 00:05:30.365 "aliases": [ 00:05:30.365 "700f8b19-0809-499d-bb73-c42e5e64c04b" 00:05:30.365 ], 00:05:30.365 "product_name": "Malloc disk", 00:05:30.365 
"block_size": 4096, 00:05:30.365 "num_blocks": 256, 00:05:30.365 "uuid": "700f8b19-0809-499d-bb73-c42e5e64c04b", 00:05:30.365 "assigned_rate_limits": { 00:05:30.365 "rw_ios_per_sec": 0, 00:05:30.365 "rw_mbytes_per_sec": 0, 00:05:30.365 "r_mbytes_per_sec": 0, 00:05:30.365 "w_mbytes_per_sec": 0 00:05:30.365 }, 00:05:30.365 "claimed": false, 00:05:30.365 "zoned": false, 00:05:30.365 "supported_io_types": { 00:05:30.365 "read": true, 00:05:30.365 "write": true, 00:05:30.365 "unmap": true, 00:05:30.365 "write_zeroes": true, 00:05:30.365 "flush": true, 00:05:30.365 "reset": true, 00:05:30.365 "compare": false, 00:05:30.365 "compare_and_write": false, 00:05:30.365 "abort": true, 00:05:30.365 "nvme_admin": false, 00:05:30.365 "nvme_io": false 00:05:30.365 }, 00:05:30.365 "memory_domains": [ 00:05:30.365 { 00:05:30.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.365 "dma_device_type": 2 00:05:30.365 } 00:05:30.365 ], 00:05:30.365 "driver_specific": {} 00:05:30.365 } 00:05:30.365 ]' 00:05:30.365 11:01:41 -- rpc/rpc.sh@32 -- # jq length 00:05:30.365 11:01:41 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.365 11:01:41 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.365 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.365 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.365 11:01:41 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.365 11:01:41 -- rpc/rpc.sh@36 -- # jq length 00:05:30.623 ************************************ 00:05:30.624 END TEST rpc_plugins 00:05:30.624 ************************************ 00:05:30.624 11:01:41 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.624 00:05:30.624 real 0m0.162s 00:05:30.624 user 0m0.105s 00:05:30.624 sys 0m0.019s 00:05:30.624 11:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.624 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.624 11:01:41 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.624 11:01:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.624 11:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.624 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.624 ************************************ 00:05:30.624 START TEST rpc_trace_cmd_test 00:05:30.624 ************************************ 00:05:30.624 11:01:41 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:30.624 11:01:41 -- rpc/rpc.sh@40 -- # local info 00:05:30.624 11:01:41 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.624 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.624 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.624 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.624 11:01:41 -- rpc/rpc.sh@42 -- # info='{ 00:05:30.624 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65862", 00:05:30.624 "tpoint_group_mask": "0x8", 00:05:30.624 "iscsi_conn": { 00:05:30.624 "mask": "0x2", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "scsi": { 00:05:30.624 "mask": "0x4", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "bdev": { 00:05:30.624 "mask": "0x8", 00:05:30.624 "tpoint_mask": 
"0xffffffffffffffff" 00:05:30.624 }, 00:05:30.624 "nvmf_rdma": { 00:05:30.624 "mask": "0x10", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "nvmf_tcp": { 00:05:30.624 "mask": "0x20", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "ftl": { 00:05:30.624 "mask": "0x40", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "blobfs": { 00:05:30.624 "mask": "0x80", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "dsa": { 00:05:30.624 "mask": "0x200", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "thread": { 00:05:30.624 "mask": "0x400", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "nvme_pcie": { 00:05:30.624 "mask": "0x800", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "iaa": { 00:05:30.624 "mask": "0x1000", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "nvme_tcp": { 00:05:30.624 "mask": "0x2000", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 }, 00:05:30.624 "bdev_nvme": { 00:05:30.624 "mask": "0x4000", 00:05:30.624 "tpoint_mask": "0x0" 00:05:30.624 } 00:05:30.624 }' 00:05:30.624 11:01:41 -- rpc/rpc.sh@43 -- # jq length 00:05:30.624 11:01:41 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:30.624 11:01:41 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.624 11:01:41 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.624 11:01:41 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.624 11:01:41 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.624 11:01:41 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.882 11:01:41 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.882 11:01:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.882 ************************************ 00:05:30.882 END TEST rpc_trace_cmd_test 00:05:30.882 ************************************ 00:05:30.882 11:01:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.882 00:05:30.882 real 0m0.281s 00:05:30.882 user 0m0.242s 00:05:30.882 sys 0m0.026s 00:05:30.882 11:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.882 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.882 11:01:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.882 11:01:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.882 11:01:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.883 11:01:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.883 11:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.883 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 ************************************ 00:05:30.883 START TEST rpc_daemon_integrity 00:05:30.883 ************************************ 00:05:30.883 11:01:41 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:30.883 11:01:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.883 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.883 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.883 11:01:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.883 11:01:41 -- rpc/rpc.sh@13 -- # jq length 00:05:30.883 11:01:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.883 11:01:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.883 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.883 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 11:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.883 11:01:41 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.883 11:01:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.883 11:01:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.883 11:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.883 11:01:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.883 { 00:05:30.883 "name": "Malloc2", 00:05:30.883 "aliases": [ 00:05:30.883 "d5239d7b-9b0f-4859-98b8-ce29a37d8e58" 00:05:30.883 ], 00:05:30.883 "product_name": "Malloc disk", 00:05:30.883 "block_size": 512, 00:05:30.883 "num_blocks": 16384, 00:05:30.883 "uuid": "d5239d7b-9b0f-4859-98b8-ce29a37d8e58", 00:05:30.883 "assigned_rate_limits": { 00:05:30.883 "rw_ios_per_sec": 0, 00:05:30.883 "rw_mbytes_per_sec": 0, 00:05:30.883 "r_mbytes_per_sec": 0, 00:05:30.883 "w_mbytes_per_sec": 0 00:05:30.883 }, 00:05:30.883 "claimed": false, 00:05:30.883 "zoned": false, 00:05:30.883 "supported_io_types": { 00:05:30.883 "read": true, 00:05:30.883 "write": true, 00:05:30.883 "unmap": true, 00:05:30.883 "write_zeroes": true, 00:05:30.883 "flush": true, 00:05:30.883 "reset": true, 00:05:30.883 "compare": false, 00:05:30.883 "compare_and_write": false, 00:05:30.883 "abort": true, 00:05:30.883 "nvme_admin": false, 00:05:30.883 "nvme_io": false 00:05:30.883 }, 00:05:30.883 "memory_domains": [ 00:05:30.883 { 00:05:30.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.883 "dma_device_type": 2 00:05:30.883 } 00:05:30.883 ], 00:05:30.883 "driver_specific": {} 00:05:30.883 } 00:05:30.883 ]' 00:05:30.883 11:01:42 -- rpc/rpc.sh@17 -- # jq length 00:05:31.142 11:01:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.142 11:01:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.142 11:01:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.142 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 [2024-12-06 11:01:42.071747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.142 [2024-12-06 11:01:42.071954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.142 [2024-12-06 11:01:42.072008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x228c9d0 00:05:31.142 [2024-12-06 11:01:42.072018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.142 [2024-12-06 11:01:42.073220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.142 [2024-12-06 11:01:42.073254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.142 Passthru0 00:05:31.142 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.142 11:01:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.142 11:01:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.142 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.142 11:01:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.142 { 00:05:31.142 "name": "Malloc2", 00:05:31.142 "aliases": [ 00:05:31.142 "d5239d7b-9b0f-4859-98b8-ce29a37d8e58" 00:05:31.142 ], 00:05:31.142 "product_name": "Malloc disk", 00:05:31.142 "block_size": 512, 00:05:31.142 "num_blocks": 16384, 00:05:31.142 "uuid": "d5239d7b-9b0f-4859-98b8-ce29a37d8e58", 00:05:31.142 "assigned_rate_limits": { 00:05:31.142 "rw_ios_per_sec": 0, 00:05:31.142 "rw_mbytes_per_sec": 0, 00:05:31.142 "r_mbytes_per_sec": 0, 00:05:31.142 
"w_mbytes_per_sec": 0 00:05:31.142 }, 00:05:31.142 "claimed": true, 00:05:31.142 "claim_type": "exclusive_write", 00:05:31.142 "zoned": false, 00:05:31.142 "supported_io_types": { 00:05:31.142 "read": true, 00:05:31.142 "write": true, 00:05:31.142 "unmap": true, 00:05:31.142 "write_zeroes": true, 00:05:31.142 "flush": true, 00:05:31.142 "reset": true, 00:05:31.142 "compare": false, 00:05:31.142 "compare_and_write": false, 00:05:31.142 "abort": true, 00:05:31.142 "nvme_admin": false, 00:05:31.142 "nvme_io": false 00:05:31.142 }, 00:05:31.142 "memory_domains": [ 00:05:31.142 { 00:05:31.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.142 "dma_device_type": 2 00:05:31.142 } 00:05:31.142 ], 00:05:31.142 "driver_specific": {} 00:05:31.142 }, 00:05:31.142 { 00:05:31.142 "name": "Passthru0", 00:05:31.142 "aliases": [ 00:05:31.142 "4fdb8d36-8b7c-5b2b-94a8-1a4da2667aa6" 00:05:31.142 ], 00:05:31.142 "product_name": "passthru", 00:05:31.142 "block_size": 512, 00:05:31.142 "num_blocks": 16384, 00:05:31.142 "uuid": "4fdb8d36-8b7c-5b2b-94a8-1a4da2667aa6", 00:05:31.142 "assigned_rate_limits": { 00:05:31.142 "rw_ios_per_sec": 0, 00:05:31.142 "rw_mbytes_per_sec": 0, 00:05:31.142 "r_mbytes_per_sec": 0, 00:05:31.142 "w_mbytes_per_sec": 0 00:05:31.142 }, 00:05:31.142 "claimed": false, 00:05:31.142 "zoned": false, 00:05:31.142 "supported_io_types": { 00:05:31.142 "read": true, 00:05:31.142 "write": true, 00:05:31.142 "unmap": true, 00:05:31.142 "write_zeroes": true, 00:05:31.142 "flush": true, 00:05:31.142 "reset": true, 00:05:31.142 "compare": false, 00:05:31.142 "compare_and_write": false, 00:05:31.142 "abort": true, 00:05:31.142 "nvme_admin": false, 00:05:31.142 "nvme_io": false 00:05:31.142 }, 00:05:31.142 "memory_domains": [ 00:05:31.142 { 00:05:31.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.142 "dma_device_type": 2 00:05:31.142 } 00:05:31.142 ], 00:05:31.142 "driver_specific": { 00:05:31.142 "passthru": { 00:05:31.142 "name": "Passthru0", 00:05:31.142 "base_bdev_name": "Malloc2" 00:05:31.142 } 00:05:31.142 } 00:05:31.142 } 00:05:31.142 ]' 00:05:31.142 11:01:42 -- rpc/rpc.sh@21 -- # jq length 00:05:31.142 11:01:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.142 11:01:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.142 11:01:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.142 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.142 11:01:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.142 11:01:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.142 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.142 11:01:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.142 11:01:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.142 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 11:01:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.142 11:01:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.142 11:01:42 -- rpc/rpc.sh@26 -- # jq length 00:05:31.142 ************************************ 00:05:31.142 END TEST rpc_daemon_integrity 00:05:31.142 ************************************ 00:05:31.142 11:01:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.142 00:05:31.142 real 0m0.318s 00:05:31.142 user 0m0.211s 00:05:31.142 sys 0m0.039s 00:05:31.142 11:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.142 
11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.142 11:01:42 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.142 11:01:42 -- rpc/rpc.sh@84 -- # killprocess 65862 00:05:31.142 11:01:42 -- common/autotest_common.sh@936 -- # '[' -z 65862 ']' 00:05:31.142 11:01:42 -- common/autotest_common.sh@940 -- # kill -0 65862 00:05:31.142 11:01:42 -- common/autotest_common.sh@941 -- # uname 00:05:31.401 11:01:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.401 11:01:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65862 00:05:31.401 killing process with pid 65862 00:05:31.401 11:01:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.401 11:01:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.402 11:01:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65862' 00:05:31.402 11:01:42 -- common/autotest_common.sh@955 -- # kill 65862 00:05:31.402 11:01:42 -- common/autotest_common.sh@960 -- # wait 65862 00:05:31.660 ************************************ 00:05:31.660 END TEST rpc 00:05:31.660 ************************************ 00:05:31.660 00:05:31.660 real 0m2.796s 00:05:31.660 user 0m3.754s 00:05:31.660 sys 0m0.574s 00:05:31.660 11:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.660 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.660 11:01:42 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.660 11:01:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.660 11:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.660 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.661 ************************************ 00:05:31.661 START TEST rpc_client 00:05:31.661 ************************************ 00:05:31.661 11:01:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.661 * Looking for test storage... 00:05:31.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:31.661 11:01:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.661 11:01:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.661 11:01:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.661 11:01:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.661 11:01:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.661 11:01:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.661 11:01:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.661 11:01:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.661 11:01:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.661 11:01:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.661 11:01:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.661 11:01:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.661 11:01:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.661 11:01:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.661 11:01:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.661 11:01:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.661 11:01:42 -- scripts/common.sh@344 -- # : 1 00:05:31.661 11:01:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.661 11:01:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.661 11:01:42 -- scripts/common.sh@364 -- # decimal 1 00:05:31.661 11:01:42 -- scripts/common.sh@352 -- # local d=1 00:05:31.661 11:01:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.661 11:01:42 -- scripts/common.sh@354 -- # echo 1 00:05:31.661 11:01:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.661 11:01:42 -- scripts/common.sh@365 -- # decimal 2 00:05:31.661 11:01:42 -- scripts/common.sh@352 -- # local d=2 00:05:31.661 11:01:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.661 11:01:42 -- scripts/common.sh@354 -- # echo 2 00:05:31.661 11:01:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.661 11:01:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.661 11:01:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.661 11:01:42 -- scripts/common.sh@367 -- # return 0 00:05:31.661 11:01:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.661 11:01:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.661 --rc genhtml_branch_coverage=1 00:05:31.661 --rc genhtml_function_coverage=1 00:05:31.661 --rc genhtml_legend=1 00:05:31.661 --rc geninfo_all_blocks=1 00:05:31.661 --rc geninfo_unexecuted_blocks=1 00:05:31.661 00:05:31.661 ' 00:05:31.661 11:01:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.661 --rc genhtml_branch_coverage=1 00:05:31.661 --rc genhtml_function_coverage=1 00:05:31.661 --rc genhtml_legend=1 00:05:31.661 --rc geninfo_all_blocks=1 00:05:31.661 --rc geninfo_unexecuted_blocks=1 00:05:31.661 00:05:31.661 ' 00:05:31.661 11:01:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.661 --rc genhtml_branch_coverage=1 00:05:31.661 --rc genhtml_function_coverage=1 00:05:31.661 --rc genhtml_legend=1 00:05:31.661 --rc geninfo_all_blocks=1 00:05:31.661 --rc geninfo_unexecuted_blocks=1 00:05:31.661 00:05:31.661 ' 00:05:31.661 11:01:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.661 --rc genhtml_branch_coverage=1 00:05:31.661 --rc genhtml_function_coverage=1 00:05:31.661 --rc genhtml_legend=1 00:05:31.661 --rc geninfo_all_blocks=1 00:05:31.661 --rc geninfo_unexecuted_blocks=1 00:05:31.661 00:05:31.661 ' 00:05:31.661 11:01:42 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:31.661 OK 00:05:31.920 11:01:42 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.920 00:05:31.920 real 0m0.209s 00:05:31.920 user 0m0.128s 00:05:31.920 sys 0m0.086s 00:05:31.920 11:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.920 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.920 ************************************ 00:05:31.920 END TEST rpc_client 00:05:31.920 ************************************ 00:05:31.920 11:01:42 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.920 11:01:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.920 11:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.920 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.920 ************************************ 00:05:31.920 START TEST 
json_config 00:05:31.920 ************************************ 00:05:31.920 11:01:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.920 11:01:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.920 11:01:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.920 11:01:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.920 11:01:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.920 11:01:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.920 11:01:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.920 11:01:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.920 11:01:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.920 11:01:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.920 11:01:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.920 11:01:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.920 11:01:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.920 11:01:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.920 11:01:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.920 11:01:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.920 11:01:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.920 11:01:43 -- scripts/common.sh@344 -- # : 1 00:05:31.920 11:01:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.920 11:01:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.920 11:01:43 -- scripts/common.sh@364 -- # decimal 1 00:05:31.920 11:01:43 -- scripts/common.sh@352 -- # local d=1 00:05:31.920 11:01:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.920 11:01:43 -- scripts/common.sh@354 -- # echo 1 00:05:31.920 11:01:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.920 11:01:43 -- scripts/common.sh@365 -- # decimal 2 00:05:31.920 11:01:43 -- scripts/common.sh@352 -- # local d=2 00:05:31.920 11:01:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.920 11:01:43 -- scripts/common.sh@354 -- # echo 2 00:05:31.920 11:01:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.920 11:01:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.920 11:01:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.920 11:01:43 -- scripts/common.sh@367 -- # return 0 00:05:31.920 11:01:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.920 11:01:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.920 --rc genhtml_branch_coverage=1 00:05:31.920 --rc genhtml_function_coverage=1 00:05:31.920 --rc genhtml_legend=1 00:05:31.920 --rc geninfo_all_blocks=1 00:05:31.920 --rc geninfo_unexecuted_blocks=1 00:05:31.920 00:05:31.920 ' 00:05:31.920 11:01:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.921 --rc genhtml_branch_coverage=1 00:05:31.921 --rc genhtml_function_coverage=1 00:05:31.921 --rc genhtml_legend=1 00:05:31.921 --rc geninfo_all_blocks=1 00:05:31.921 --rc geninfo_unexecuted_blocks=1 00:05:31.921 00:05:31.921 ' 00:05:31.921 11:01:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.921 --rc genhtml_branch_coverage=1 00:05:31.921 --rc genhtml_function_coverage=1 00:05:31.921 --rc genhtml_legend=1 00:05:31.921 --rc 
geninfo_all_blocks=1 00:05:31.921 --rc geninfo_unexecuted_blocks=1 00:05:31.921 00:05:31.921 ' 00:05:31.921 11:01:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.921 --rc genhtml_branch_coverage=1 00:05:31.921 --rc genhtml_function_coverage=1 00:05:31.921 --rc genhtml_legend=1 00:05:31.921 --rc geninfo_all_blocks=1 00:05:31.921 --rc geninfo_unexecuted_blocks=1 00:05:31.921 00:05:31.921 ' 00:05:31.921 11:01:43 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.921 11:01:43 -- nvmf/common.sh@7 -- # uname -s 00:05:31.921 11:01:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.921 11:01:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.921 11:01:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.921 11:01:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.921 11:01:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.921 11:01:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.921 11:01:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.921 11:01:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.921 11:01:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.921 11:01:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.921 11:01:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:05:31.921 11:01:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:05:31.921 11:01:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.921 11:01:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.921 11:01:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.921 11:01:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.921 11:01:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.921 11:01:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.921 11:01:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.921 11:01:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.921 11:01:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.921 11:01:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.921 
11:01:43 -- paths/export.sh@5 -- # export PATH 00:05:31.921 11:01:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.921 11:01:43 -- nvmf/common.sh@46 -- # : 0 00:05:31.921 11:01:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:31.921 11:01:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:31.921 11:01:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:31.921 11:01:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.921 11:01:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.921 11:01:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:31.921 11:01:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:31.921 11:01:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:31.921 11:01:43 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.921 11:01:43 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.921 11:01:43 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:31.921 11:01:43 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.921 11:01:43 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:31.921 11:01:43 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.921 11:01:43 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:31.921 11:01:43 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:31.921 11:01:43 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:31.921 11:01:43 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:31.921 11:01:43 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.921 INFO: JSON configuration test init 00:05:31.921 11:01:43 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:31.921 11:01:43 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:31.921 11:01:43 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:31.921 11:01:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.921 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.921 11:01:43 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:31.921 11:01:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.921 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.921 11:01:43 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.921 Waiting for target to run... 
00:05:31.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.921 11:01:43 -- json_config/json_config.sh@98 -- # local app=target 00:05:31.921 11:01:43 -- json_config/json_config.sh@99 -- # shift 00:05:31.921 11:01:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:31.921 11:01:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.921 11:01:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=66115 00:05:31.921 11:01:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:31.921 11:01:43 -- json_config/json_config.sh@114 -- # waitforlisten 66115 /var/tmp/spdk_tgt.sock 00:05:31.921 11:01:43 -- common/autotest_common.sh@829 -- # '[' -z 66115 ']' 00:05:31.921 11:01:43 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.921 11:01:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.921 11:01:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.921 11:01:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.921 11:01:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.922 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.180 [2024-12-06 11:01:43.123002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.180 [2024-12-06 11:01:43.123314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66115 ] 00:05:32.438 [2024-12-06 11:01:43.432288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.438 [2024-12-06 11:01:43.457002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.438 [2024-12-06 11:01:43.457449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.005 11:01:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.005 11:01:44 -- common/autotest_common.sh@862 -- # return 0 00:05:33.005 00:05:33.005 11:01:44 -- json_config/json_config.sh@115 -- # echo '' 00:05:33.005 11:01:44 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:33.005 11:01:44 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:33.005 11:01:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.005 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:33.005 11:01:44 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:33.005 11:01:44 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:33.005 11:01:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.005 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:33.263 11:01:44 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.263 11:01:44 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:33.263 11:01:44 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.522 11:01:44 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:33.522 11:01:44 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:33.522 11:01:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.522 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:33.522 11:01:44 -- json_config/json_config.sh@48 -- # local ret=0 00:05:33.522 11:01:44 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.522 11:01:44 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:33.522 11:01:44 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:33.522 11:01:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.522 11:01:44 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:34.089 11:01:44 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:34.089 11:01:44 -- json_config/json_config.sh@51 -- # local get_types 00:05:34.089 11:01:44 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:34.089 11:01:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.089 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:34.089 11:01:44 -- json_config/json_config.sh@58 -- # return 0 00:05:34.089 11:01:44 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:34.089 11:01:44 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:34.089 11:01:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.089 11:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:34.089 11:01:44 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.089 11:01:44 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:34.089 11:01:44 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.089 11:01:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.347 MallocForNvmf0 00:05:34.347 11:01:45 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.347 11:01:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.606 MallocForNvmf1 00:05:34.606 11:01:45 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.606 11:01:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.606 [2024-12-06 11:01:45.728830] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:34.606 11:01:45 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.606 11:01:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.864 11:01:45 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.864 11:01:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.123 11:01:46 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.123 11:01:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.381 11:01:46 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.382 11:01:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.640 [2024-12-06 11:01:46.637323] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.640 11:01:46 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:35.640 11:01:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.640 11:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.640 11:01:46 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:35.640 11:01:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.640 11:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.640 11:01:46 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:35.640 11:01:46 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.640 11:01:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.899 MallocBdevForConfigChangeCheck 00:05:35.899 11:01:47 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:35.899 11:01:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.899 11:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.158 11:01:47 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:36.158 11:01:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.417 INFO: shutting down applications... 00:05:36.417 11:01:47 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
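Everything the create_nvmf_subsystem_config step above does can be replayed as a handful of rpc.py calls. This is a sketch only, reusing the socket path, bdev names, NQN, and 127.0.0.1:4420 listener that appear in the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # malloc bdevs that back the two namespaces
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

  # TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420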
00:05:36.417 11:01:47 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:36.417 11:01:47 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:36.417 11:01:47 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:36.417 11:01:47 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.676 Calling clear_iscsi_subsystem 00:05:36.676 Calling clear_nvmf_subsystem 00:05:36.676 Calling clear_nbd_subsystem 00:05:36.676 Calling clear_ublk_subsystem 00:05:36.676 Calling clear_vhost_blk_subsystem 00:05:36.676 Calling clear_vhost_scsi_subsystem 00:05:36.676 Calling clear_scheduler_subsystem 00:05:36.676 Calling clear_bdev_subsystem 00:05:36.676 Calling clear_accel_subsystem 00:05:36.676 Calling clear_vmd_subsystem 00:05:36.676 Calling clear_sock_subsystem 00:05:36.676 Calling clear_iobuf_subsystem 00:05:36.676 11:01:47 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:36.676 11:01:47 -- json_config/json_config.sh@396 -- # count=100 00:05:36.676 11:01:47 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:36.676 11:01:47 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.676 11:01:47 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.676 11:01:47 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:37.245 11:01:48 -- json_config/json_config.sh@398 -- # break 00:05:37.245 11:01:48 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:37.245 11:01:48 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:37.245 11:01:48 -- json_config/json_config.sh@120 -- # local app=target 00:05:37.245 11:01:48 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:37.245 11:01:48 -- json_config/json_config.sh@124 -- # [[ -n 66115 ]] 00:05:37.245 11:01:48 -- json_config/json_config.sh@127 -- # kill -SIGINT 66115 00:05:37.245 11:01:48 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:37.245 11:01:48 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:37.245 11:01:48 -- json_config/json_config.sh@130 -- # kill -0 66115 00:05:37.245 11:01:48 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:37.813 11:01:48 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:37.813 SPDK target shutdown done 00:05:37.813 INFO: relaunching applications... 00:05:37.813 11:01:48 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:37.813 11:01:48 -- json_config/json_config.sh@130 -- # kill -0 66115 00:05:37.813 11:01:48 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:37.813 11:01:48 -- json_config/json_config.sh@132 -- # break 00:05:37.813 11:01:48 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:37.813 11:01:48 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:37.813 11:01:48 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:05:37.813 11:01:48 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.813 11:01:48 -- json_config/json_config.sh@98 -- # local app=target 00:05:37.813 11:01:48 -- json_config/json_config.sh@99 -- # shift 00:05:37.813 11:01:48 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:37.813 11:01:48 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:37.813 11:01:48 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:37.813 11:01:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:37.813 11:01:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:37.813 11:01:48 -- json_config/json_config.sh@111 -- # app_pid[$app]=66300 00:05:37.813 11:01:48 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.813 Waiting for target to run... 00:05:37.813 11:01:48 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:37.813 11:01:48 -- json_config/json_config.sh@114 -- # waitforlisten 66300 /var/tmp/spdk_tgt.sock 00:05:37.813 11:01:48 -- common/autotest_common.sh@829 -- # '[' -z 66300 ']' 00:05:37.813 11:01:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.813 11:01:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.813 11:01:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.813 11:01:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.813 11:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.813 [2024-12-06 11:01:48.719945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.813 [2024-12-06 11:01:48.720093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66300 ] 00:05:38.073 [2024-12-06 11:01:49.017061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.073 [2024-12-06 11:01:49.037162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.073 [2024-12-06 11:01:49.037305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.332 [2024-12-06 11:01:49.327482] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.332 [2024-12-06 11:01:49.359569] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.592 00:05:38.592 INFO: Checking if target configuration is the same... 00:05:38.592 11:01:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.592 11:01:49 -- common/autotest_common.sh@862 -- # return 0 00:05:38.592 11:01:49 -- json_config/json_config.sh@115 -- # echo '' 00:05:38.592 11:01:49 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:38.592 11:01:49 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
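The relaunch just performed swaps --wait-for-rpc for a saved configuration file. Roughly, under the same paths as above and with $old_tgt_pid standing in for whatever pid the first target had (that variable is illustrative, not taken from the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

  # capture the live configuration, then stop the first target
  $rpc save_config > "$cfg"
  kill -SIGINT "$old_tgt_pid"    # the harness then polls kill -0 until the pid disappears

  # start a fresh target directly from the saved JSON
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json "$cfg" &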
00:05:38.592 11:01:49 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.592 11:01:49 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:38.592 11:01:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.592 + '[' 2 -ne 2 ']' 00:05:38.592 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:38.592 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:38.592 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.592 +++ basename /dev/fd/62 00:05:38.592 ++ mktemp /tmp/62.XXX 00:05:38.592 + tmp_file_1=/tmp/62.cqZ 00:05:38.592 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.592 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.592 + tmp_file_2=/tmp/spdk_tgt_config.json.3QR 00:05:38.592 + ret=0 00:05:38.592 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.851 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.111 + diff -u /tmp/62.cqZ /tmp/spdk_tgt_config.json.3QR 00:05:39.111 INFO: JSON config files are the same 00:05:39.111 + echo 'INFO: JSON config files are the same' 00:05:39.111 + rm /tmp/62.cqZ /tmp/spdk_tgt_config.json.3QR 00:05:39.111 + exit 0 00:05:39.111 INFO: changing configuration and checking if this can be detected... 00:05:39.111 11:01:50 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:39.111 11:01:50 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.111 11:01:50 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.111 11:01:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.370 11:01:50 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.370 11:01:50 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:39.370 11:01:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.370 + '[' 2 -ne 2 ']' 00:05:39.370 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:39.370 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:39.370 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:39.370 +++ basename /dev/fd/62 00:05:39.370 ++ mktemp /tmp/62.XXX 00:05:39.370 + tmp_file_1=/tmp/62.Mxy 00:05:39.370 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.370 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.370 + tmp_file_2=/tmp/spdk_tgt_config.json.iPu 00:05:39.370 + ret=0 00:05:39.370 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.637 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.637 + diff -u /tmp/62.Mxy /tmp/spdk_tgt_config.json.iPu 00:05:39.637 + ret=1 00:05:39.637 + echo '=== Start of file: /tmp/62.Mxy ===' 00:05:39.637 + cat /tmp/62.Mxy 00:05:39.637 + echo '=== End of file: /tmp/62.Mxy ===' 00:05:39.637 + echo '' 00:05:39.637 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iPu ===' 00:05:39.637 + cat /tmp/spdk_tgt_config.json.iPu 00:05:39.637 + echo '=== End of file: /tmp/spdk_tgt_config.json.iPu ===' 00:05:39.637 + echo '' 00:05:39.637 + rm /tmp/62.Mxy /tmp/spdk_tgt_config.json.iPu 00:05:39.637 + exit 1 00:05:39.637 INFO: configuration change detected. 00:05:39.637 11:01:50 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:39.637 11:01:50 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:39.637 11:01:50 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:39.637 11:01:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.637 11:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.637 11:01:50 -- json_config/json_config.sh@360 -- # local ret=0 00:05:39.637 11:01:50 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:39.637 11:01:50 -- json_config/json_config.sh@370 -- # [[ -n 66300 ]] 00:05:39.637 11:01:50 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:39.637 11:01:50 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.637 11:01:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.637 11:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.637 11:01:50 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:39.637 11:01:50 -- json_config/json_config.sh@246 -- # uname -s 00:05:39.959 11:01:50 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:39.959 11:01:50 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:39.959 11:01:50 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:39.959 11:01:50 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.959 11:01:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.959 11:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.960 11:01:50 -- json_config/json_config.sh@376 -- # killprocess 66300 00:05:39.960 11:01:50 -- common/autotest_common.sh@936 -- # '[' -z 66300 ']' 00:05:39.960 11:01:50 -- common/autotest_common.sh@940 -- # kill -0 66300 00:05:39.960 11:01:50 -- common/autotest_common.sh@941 -- # uname 00:05:39.960 11:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.960 11:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66300 00:05:39.960 killing process with pid 66300 00:05:39.960 11:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.960 11:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.960 11:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66300' 00:05:39.960 
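The pass/fail decision json_diff.sh just made reduces to a normalized diff of two save_config dumps. A hand-run equivalent, assuming config_filter.py reads JSON on stdin as it is used above (the /tmp file names here are illustrative; the harness uses mktemp):

  repo=/home/vagrant/spdk_repo/spdk
  rpc="$repo/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # sort both configs so field ordering cannot cause a spurious mismatch, then diff them
  $rpc save_config | $repo/test/json_config/config_filter.py -method sort > /tmp/live_config.json
  $repo/test/json_config/config_filter.py -method sort < "$repo/spdk_tgt_config.json" > /tmp/ref_config.json
  diff -u /tmp/ref_config.json /tmp/live_config.json \
      && echo 'INFO: JSON config files are the same' \
      || echo 'INFO: configuration change detected.'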
11:01:50 -- common/autotest_common.sh@955 -- # kill 66300 00:05:39.960 11:01:50 -- common/autotest_common.sh@960 -- # wait 66300 00:05:39.960 11:01:50 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.960 11:01:50 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:39.960 11:01:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.960 11:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.960 INFO: Success 00:05:39.960 11:01:51 -- json_config/json_config.sh@381 -- # return 0 00:05:39.960 11:01:51 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:39.960 ************************************ 00:05:39.960 END TEST json_config 00:05:39.960 ************************************ 00:05:39.960 00:05:39.960 real 0m8.173s 00:05:39.960 user 0m11.834s 00:05:39.960 sys 0m1.420s 00:05:39.960 11:01:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.960 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.253 11:01:51 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.253 11:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.253 11:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.253 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.253 ************************************ 00:05:40.253 START TEST json_config_extra_key 00:05:40.253 ************************************ 00:05:40.253 11:01:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.253 11:01:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.253 11:01:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.253 11:01:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.253 11:01:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.253 11:01:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.253 11:01:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.253 11:01:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.253 11:01:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.253 11:01:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.253 11:01:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.253 11:01:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.253 11:01:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.253 11:01:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.253 11:01:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.253 11:01:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.253 11:01:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.253 11:01:51 -- scripts/common.sh@344 -- # : 1 00:05:40.253 11:01:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.253 11:01:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.253 11:01:51 -- scripts/common.sh@364 -- # decimal 1 00:05:40.253 11:01:51 -- scripts/common.sh@352 -- # local d=1 00:05:40.253 11:01:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.253 11:01:51 -- scripts/common.sh@354 -- # echo 1 00:05:40.253 11:01:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.253 11:01:51 -- scripts/common.sh@365 -- # decimal 2 00:05:40.254 11:01:51 -- scripts/common.sh@352 -- # local d=2 00:05:40.254 11:01:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.254 11:01:51 -- scripts/common.sh@354 -- # echo 2 00:05:40.254 11:01:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.254 11:01:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.254 11:01:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.254 11:01:51 -- scripts/common.sh@367 -- # return 0 00:05:40.254 11:01:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.254 11:01:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.254 --rc genhtml_branch_coverage=1 00:05:40.254 --rc genhtml_function_coverage=1 00:05:40.254 --rc genhtml_legend=1 00:05:40.254 --rc geninfo_all_blocks=1 00:05:40.254 --rc geninfo_unexecuted_blocks=1 00:05:40.254 00:05:40.254 ' 00:05:40.254 11:01:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.254 --rc genhtml_branch_coverage=1 00:05:40.254 --rc genhtml_function_coverage=1 00:05:40.254 --rc genhtml_legend=1 00:05:40.254 --rc geninfo_all_blocks=1 00:05:40.254 --rc geninfo_unexecuted_blocks=1 00:05:40.254 00:05:40.254 ' 00:05:40.254 11:01:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.254 --rc genhtml_branch_coverage=1 00:05:40.254 --rc genhtml_function_coverage=1 00:05:40.254 --rc genhtml_legend=1 00:05:40.254 --rc geninfo_all_blocks=1 00:05:40.254 --rc geninfo_unexecuted_blocks=1 00:05:40.254 00:05:40.254 ' 00:05:40.254 11:01:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.254 --rc genhtml_branch_coverage=1 00:05:40.254 --rc genhtml_function_coverage=1 00:05:40.254 --rc genhtml_legend=1 00:05:40.254 --rc geninfo_all_blocks=1 00:05:40.254 --rc geninfo_unexecuted_blocks=1 00:05:40.254 00:05:40.254 ' 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.254 11:01:51 -- nvmf/common.sh@7 -- # uname -s 00:05:40.254 11:01:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.254 11:01:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.254 11:01:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.254 11:01:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.254 11:01:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.254 11:01:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.254 11:01:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.254 11:01:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.254 11:01:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.254 11:01:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.254 11:01:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:05:40.254 11:01:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:05:40.254 11:01:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.254 11:01:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.254 11:01:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.254 11:01:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.254 11:01:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.254 11:01:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.254 11:01:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.254 11:01:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.254 11:01:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.254 11:01:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.254 11:01:51 -- paths/export.sh@5 -- # export PATH 00:05:40.254 11:01:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.254 11:01:51 -- nvmf/common.sh@46 -- # : 0 00:05:40.254 11:01:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:40.254 11:01:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:40.254 11:01:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:40.254 11:01:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.254 11:01:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.254 11:01:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:40.254 11:01:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:40.254 11:01:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:40.254 INFO: launching applications... 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:40.254 Waiting for target to run... 00:05:40.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66453 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66453 /var/tmp/spdk_tgt.sock 00:05:40.254 11:01:51 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.254 11:01:51 -- common/autotest_common.sh@829 -- # '[' -z 66453 ']' 00:05:40.254 11:01:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.254 11:01:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.254 11:01:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.254 11:01:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.254 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.254 [2024-12-06 11:01:51.326610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.254 [2024-12-06 11:01:51.326717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66453 ] 00:05:40.513 [2024-12-06 11:01:51.633703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.513 [2024-12-06 11:01:51.652660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.513 [2024-12-06 11:01:51.652822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.451 00:05:41.451 INFO: shutting down applications... 
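The json_config_extra_key shutdown that the trace below performs is a bounded SIGINT-then-poll loop: send SIGINT to the target, then re-check the pid with kill -0 up to 30 times, sleeping 0.5 s between checks. A minimal stand-alone sketch of that pattern (the pid and limits are taken from this trace; the variable names are illustrative rather than the test's own):

app_pid=66453                      # spdk_tgt launched above with extra_key.json
kill -SIGINT "$app_pid"            # request a clean shutdown
for (( i = 0; i < 30; i++ )); do
  if ! kill -0 "$app_pid" 2>/dev/null; then
    echo 'SPDK target shutdown done'   # message the test prints on success
    break
  fi
  sleep 0.5                        # same back-off interval the trace shows
done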
00:05:41.451 11:01:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.451 11:01:52 -- common/autotest_common.sh@862 -- # return 0 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66453 ]] 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66453 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66453 00:05:41.451 11:01:52 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66453 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:41.710 SPDK target shutdown done 00:05:41.710 Success 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:41.710 11:01:52 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:41.710 00:05:41.710 real 0m1.748s 00:05:41.710 user 0m1.568s 00:05:41.710 sys 0m0.321s 00:05:41.710 ************************************ 00:05:41.710 END TEST json_config_extra_key 00:05:41.710 ************************************ 00:05:41.710 11:01:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.710 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.969 11:01:52 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.969 11:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.969 11:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.969 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.969 ************************************ 00:05:41.969 START TEST alias_rpc 00:05:41.969 ************************************ 00:05:41.969 11:01:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.969 * Looking for test storage... 
00:05:41.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:41.969 11:01:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.969 11:01:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.969 11:01:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.969 11:01:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.969 11:01:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.969 11:01:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.969 11:01:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.969 11:01:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.969 11:01:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.969 11:01:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.969 11:01:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.969 11:01:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.969 11:01:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.969 11:01:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.969 11:01:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.969 11:01:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.969 11:01:53 -- scripts/common.sh@344 -- # : 1 00:05:41.969 11:01:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.969 11:01:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.969 11:01:53 -- scripts/common.sh@364 -- # decimal 1 00:05:41.969 11:01:53 -- scripts/common.sh@352 -- # local d=1 00:05:41.970 11:01:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.970 11:01:53 -- scripts/common.sh@354 -- # echo 1 00:05:41.970 11:01:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.970 11:01:53 -- scripts/common.sh@365 -- # decimal 2 00:05:41.970 11:01:53 -- scripts/common.sh@352 -- # local d=2 00:05:41.970 11:01:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.970 11:01:53 -- scripts/common.sh@354 -- # echo 2 00:05:41.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
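The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step here comes from waitforlisten in autotest_common.sh, with max_retries=100 visible in the trace. A rough, hedged approximation of that readiness poll, assuming rpc.py with the spdk_get_version method as the probe (spdk_get_version is listed by rpc_get_methods later in this log); the real helper may use a different check and interval:

rpc_sock=/var/tmp/spdk.sock
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 0; i < 100; i++ )); do          # 100 mirrors max_retries in the trace
  if "$rpc_py" -s "$rpc_sock" spdk_get_version >/dev/null 2>&1; then
    break                                  # target is up and answering RPCs
  fi
  sleep 0.1                                # polling interval is an assumption
done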
00:05:41.970 11:01:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.970 11:01:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.970 11:01:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.970 11:01:53 -- scripts/common.sh@367 -- # return 0 00:05:41.970 11:01:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.970 11:01:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 11:01:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 11:01:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 11:01:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.970 --rc genhtml_branch_coverage=1 00:05:41.970 --rc genhtml_function_coverage=1 00:05:41.970 --rc genhtml_legend=1 00:05:41.970 --rc geninfo_all_blocks=1 00:05:41.970 --rc geninfo_unexecuted_blocks=1 00:05:41.970 00:05:41.970 ' 00:05:41.970 11:01:53 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.970 11:01:53 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66519 00:05:41.970 11:01:53 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66519 00:05:41.970 11:01:53 -- common/autotest_common.sh@829 -- # '[' -z 66519 ']' 00:05:41.970 11:01:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.970 11:01:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.970 11:01:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.970 11:01:53 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.970 11:01:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.970 11:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.229 [2024-12-06 11:01:53.125520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:42.229 [2024-12-06 11:01:53.125641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66519 ] 00:05:42.229 [2024-12-06 11:01:53.264664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.229 [2024-12-06 11:01:53.294789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.229 [2024-12-06 11:01:53.294959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.166 11:01:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.166 11:01:54 -- common/autotest_common.sh@862 -- # return 0 00:05:43.166 11:01:54 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:43.424 11:01:54 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66519 00:05:43.424 11:01:54 -- common/autotest_common.sh@936 -- # '[' -z 66519 ']' 00:05:43.424 11:01:54 -- common/autotest_common.sh@940 -- # kill -0 66519 00:05:43.424 11:01:54 -- common/autotest_common.sh@941 -- # uname 00:05:43.424 11:01:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.424 11:01:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66519 00:05:43.424 killing process with pid 66519 00:05:43.424 11:01:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.424 11:01:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.424 11:01:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66519' 00:05:43.424 11:01:54 -- common/autotest_common.sh@955 -- # kill 66519 00:05:43.424 11:01:54 -- common/autotest_common.sh@960 -- # wait 66519 00:05:43.684 ************************************ 00:05:43.684 END TEST alias_rpc 00:05:43.684 ************************************ 00:05:43.684 00:05:43.684 real 0m1.714s 00:05:43.684 user 0m2.045s 00:05:43.684 sys 0m0.332s 00:05:43.684 11:01:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.684 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.684 11:01:54 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:43.684 11:01:54 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.684 11:01:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.684 11:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.684 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.684 ************************************ 00:05:43.684 START TEST spdkcli_tcp 00:05:43.684 ************************************ 00:05:43.684 11:01:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.684 * Looking for test storage... 
00:05:43.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:43.684 11:01:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.684 11:01:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.684 11:01:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.684 11:01:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.684 11:01:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.684 11:01:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.684 11:01:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.684 11:01:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.684 11:01:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.684 11:01:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.684 11:01:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.684 11:01:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.684 11:01:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.684 11:01:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.684 11:01:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.684 11:01:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.684 11:01:54 -- scripts/common.sh@344 -- # : 1 00:05:43.684 11:01:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.684 11:01:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.684 11:01:54 -- scripts/common.sh@364 -- # decimal 1 00:05:43.684 11:01:54 -- scripts/common.sh@352 -- # local d=1 00:05:43.684 11:01:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.684 11:01:54 -- scripts/common.sh@354 -- # echo 1 00:05:43.684 11:01:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.684 11:01:54 -- scripts/common.sh@365 -- # decimal 2 00:05:43.684 11:01:54 -- scripts/common.sh@352 -- # local d=2 00:05:43.684 11:01:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.684 11:01:54 -- scripts/common.sh@354 -- # echo 2 00:05:43.684 11:01:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.684 11:01:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.684 11:01:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.684 11:01:54 -- scripts/common.sh@367 -- # return 0 00:05:43.684 11:01:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.684 11:01:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.684 --rc genhtml_branch_coverage=1 00:05:43.684 --rc genhtml_function_coverage=1 00:05:43.684 --rc genhtml_legend=1 00:05:43.684 --rc geninfo_all_blocks=1 00:05:43.684 --rc geninfo_unexecuted_blocks=1 00:05:43.684 00:05:43.684 ' 00:05:43.684 11:01:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.684 --rc genhtml_branch_coverage=1 00:05:43.684 --rc genhtml_function_coverage=1 00:05:43.684 --rc genhtml_legend=1 00:05:43.684 --rc geninfo_all_blocks=1 00:05:43.684 --rc geninfo_unexecuted_blocks=1 00:05:43.684 00:05:43.684 ' 00:05:43.684 11:01:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.684 --rc genhtml_branch_coverage=1 00:05:43.684 --rc genhtml_function_coverage=1 00:05:43.684 --rc genhtml_legend=1 00:05:43.684 --rc geninfo_all_blocks=1 00:05:43.684 --rc geninfo_unexecuted_blocks=1 00:05:43.684 00:05:43.684 ' 00:05:43.684 11:01:54 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.684 --rc genhtml_branch_coverage=1 00:05:43.684 --rc genhtml_function_coverage=1 00:05:43.684 --rc genhtml_legend=1 00:05:43.684 --rc geninfo_all_blocks=1 00:05:43.684 --rc geninfo_unexecuted_blocks=1 00:05:43.684 00:05:43.684 ' 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:43.684 11:01:54 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:43.684 11:01:54 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.684 11:01:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.684 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.684 11:01:54 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66602 00:05:43.685 11:01:54 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.685 11:01:54 -- spdkcli/tcp.sh@27 -- # waitforlisten 66602 00:05:43.685 11:01:54 -- common/autotest_common.sh@829 -- # '[' -z 66602 ']' 00:05:43.685 11:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.685 11:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.685 11:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.685 11:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.685 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.944 [2024-12-06 11:01:54.865486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:43.944 [2024-12-06 11:01:54.865606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66602 ] 00:05:43.944 [2024-12-06 11:01:54.992932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.944 [2024-12-06 11:01:55.023900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.944 [2024-12-06 11:01:55.024444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.944 [2024-12-06 11:01:55.024435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.882 11:01:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.882 11:01:55 -- common/autotest_common.sh@862 -- # return 0 00:05:44.882 11:01:55 -- spdkcli/tcp.sh@31 -- # socat_pid=66619 00:05:44.882 11:01:55 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.882 11:01:55 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.141 [ 00:05:45.141 "bdev_malloc_delete", 00:05:45.141 "bdev_malloc_create", 00:05:45.141 "bdev_null_resize", 00:05:45.141 "bdev_null_delete", 00:05:45.141 "bdev_null_create", 00:05:45.141 "bdev_nvme_cuse_unregister", 00:05:45.141 "bdev_nvme_cuse_register", 00:05:45.141 "bdev_opal_new_user", 00:05:45.141 "bdev_opal_set_lock_state", 00:05:45.141 "bdev_opal_delete", 00:05:45.141 "bdev_opal_get_info", 00:05:45.141 "bdev_opal_create", 00:05:45.141 "bdev_nvme_opal_revert", 00:05:45.141 "bdev_nvme_opal_init", 00:05:45.141 "bdev_nvme_send_cmd", 00:05:45.141 "bdev_nvme_get_path_iostat", 00:05:45.141 "bdev_nvme_get_mdns_discovery_info", 00:05:45.141 "bdev_nvme_stop_mdns_discovery", 00:05:45.141 "bdev_nvme_start_mdns_discovery", 00:05:45.141 "bdev_nvme_set_multipath_policy", 00:05:45.141 "bdev_nvme_set_preferred_path", 00:05:45.141 "bdev_nvme_get_io_paths", 00:05:45.141 "bdev_nvme_remove_error_injection", 00:05:45.141 "bdev_nvme_add_error_injection", 00:05:45.141 "bdev_nvme_get_discovery_info", 00:05:45.141 "bdev_nvme_stop_discovery", 00:05:45.141 "bdev_nvme_start_discovery", 00:05:45.141 "bdev_nvme_get_controller_health_info", 00:05:45.141 "bdev_nvme_disable_controller", 00:05:45.141 "bdev_nvme_enable_controller", 00:05:45.141 "bdev_nvme_reset_controller", 00:05:45.141 "bdev_nvme_get_transport_statistics", 00:05:45.141 "bdev_nvme_apply_firmware", 00:05:45.141 "bdev_nvme_detach_controller", 00:05:45.141 "bdev_nvme_get_controllers", 00:05:45.141 "bdev_nvme_attach_controller", 00:05:45.141 "bdev_nvme_set_hotplug", 00:05:45.141 "bdev_nvme_set_options", 00:05:45.141 "bdev_passthru_delete", 00:05:45.141 "bdev_passthru_create", 00:05:45.141 "bdev_lvol_grow_lvstore", 00:05:45.141 "bdev_lvol_get_lvols", 00:05:45.141 "bdev_lvol_get_lvstores", 00:05:45.141 "bdev_lvol_delete", 00:05:45.141 "bdev_lvol_set_read_only", 00:05:45.141 "bdev_lvol_resize", 00:05:45.141 "bdev_lvol_decouple_parent", 00:05:45.141 "bdev_lvol_inflate", 00:05:45.141 "bdev_lvol_rename", 00:05:45.141 "bdev_lvol_clone_bdev", 00:05:45.141 "bdev_lvol_clone", 00:05:45.141 "bdev_lvol_snapshot", 00:05:45.141 "bdev_lvol_create", 00:05:45.141 "bdev_lvol_delete_lvstore", 00:05:45.141 "bdev_lvol_rename_lvstore", 00:05:45.141 "bdev_lvol_create_lvstore", 00:05:45.141 "bdev_raid_set_options", 00:05:45.141 "bdev_raid_remove_base_bdev", 00:05:45.141 "bdev_raid_add_base_bdev", 
00:05:45.141 "bdev_raid_delete", 00:05:45.141 "bdev_raid_create", 00:05:45.141 "bdev_raid_get_bdevs", 00:05:45.141 "bdev_error_inject_error", 00:05:45.141 "bdev_error_delete", 00:05:45.141 "bdev_error_create", 00:05:45.141 "bdev_split_delete", 00:05:45.141 "bdev_split_create", 00:05:45.141 "bdev_delay_delete", 00:05:45.141 "bdev_delay_create", 00:05:45.141 "bdev_delay_update_latency", 00:05:45.141 "bdev_zone_block_delete", 00:05:45.141 "bdev_zone_block_create", 00:05:45.141 "blobfs_create", 00:05:45.141 "blobfs_detect", 00:05:45.141 "blobfs_set_cache_size", 00:05:45.141 "bdev_aio_delete", 00:05:45.141 "bdev_aio_rescan", 00:05:45.141 "bdev_aio_create", 00:05:45.141 "bdev_ftl_set_property", 00:05:45.141 "bdev_ftl_get_properties", 00:05:45.141 "bdev_ftl_get_stats", 00:05:45.141 "bdev_ftl_unmap", 00:05:45.141 "bdev_ftl_unload", 00:05:45.141 "bdev_ftl_delete", 00:05:45.141 "bdev_ftl_load", 00:05:45.141 "bdev_ftl_create", 00:05:45.141 "bdev_virtio_attach_controller", 00:05:45.141 "bdev_virtio_scsi_get_devices", 00:05:45.141 "bdev_virtio_detach_controller", 00:05:45.141 "bdev_virtio_blk_set_hotplug", 00:05:45.141 "bdev_iscsi_delete", 00:05:45.141 "bdev_iscsi_create", 00:05:45.141 "bdev_iscsi_set_options", 00:05:45.141 "bdev_uring_delete", 00:05:45.141 "bdev_uring_create", 00:05:45.141 "accel_error_inject_error", 00:05:45.141 "ioat_scan_accel_module", 00:05:45.141 "dsa_scan_accel_module", 00:05:45.141 "iaa_scan_accel_module", 00:05:45.141 "iscsi_set_options", 00:05:45.141 "iscsi_get_auth_groups", 00:05:45.141 "iscsi_auth_group_remove_secret", 00:05:45.141 "iscsi_auth_group_add_secret", 00:05:45.141 "iscsi_delete_auth_group", 00:05:45.141 "iscsi_create_auth_group", 00:05:45.141 "iscsi_set_discovery_auth", 00:05:45.141 "iscsi_get_options", 00:05:45.141 "iscsi_target_node_request_logout", 00:05:45.141 "iscsi_target_node_set_redirect", 00:05:45.141 "iscsi_target_node_set_auth", 00:05:45.141 "iscsi_target_node_add_lun", 00:05:45.141 "iscsi_get_connections", 00:05:45.141 "iscsi_portal_group_set_auth", 00:05:45.141 "iscsi_start_portal_group", 00:05:45.141 "iscsi_delete_portal_group", 00:05:45.141 "iscsi_create_portal_group", 00:05:45.141 "iscsi_get_portal_groups", 00:05:45.141 "iscsi_delete_target_node", 00:05:45.141 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.141 "iscsi_target_node_add_pg_ig_maps", 00:05:45.141 "iscsi_create_target_node", 00:05:45.141 "iscsi_get_target_nodes", 00:05:45.141 "iscsi_delete_initiator_group", 00:05:45.141 "iscsi_initiator_group_remove_initiators", 00:05:45.141 "iscsi_initiator_group_add_initiators", 00:05:45.141 "iscsi_create_initiator_group", 00:05:45.141 "iscsi_get_initiator_groups", 00:05:45.141 "nvmf_set_crdt", 00:05:45.141 "nvmf_set_config", 00:05:45.141 "nvmf_set_max_subsystems", 00:05:45.141 "nvmf_subsystem_get_listeners", 00:05:45.141 "nvmf_subsystem_get_qpairs", 00:05:45.141 "nvmf_subsystem_get_controllers", 00:05:45.141 "nvmf_get_stats", 00:05:45.141 "nvmf_get_transports", 00:05:45.141 "nvmf_create_transport", 00:05:45.141 "nvmf_get_targets", 00:05:45.141 "nvmf_delete_target", 00:05:45.141 "nvmf_create_target", 00:05:45.141 "nvmf_subsystem_allow_any_host", 00:05:45.141 "nvmf_subsystem_remove_host", 00:05:45.141 "nvmf_subsystem_add_host", 00:05:45.141 "nvmf_subsystem_remove_ns", 00:05:45.141 "nvmf_subsystem_add_ns", 00:05:45.141 "nvmf_subsystem_listener_set_ana_state", 00:05:45.141 "nvmf_discovery_get_referrals", 00:05:45.141 "nvmf_discovery_remove_referral", 00:05:45.141 "nvmf_discovery_add_referral", 00:05:45.141 "nvmf_subsystem_remove_listener", 00:05:45.141 
"nvmf_subsystem_add_listener", 00:05:45.141 "nvmf_delete_subsystem", 00:05:45.141 "nvmf_create_subsystem", 00:05:45.141 "nvmf_get_subsystems", 00:05:45.141 "env_dpdk_get_mem_stats", 00:05:45.141 "nbd_get_disks", 00:05:45.141 "nbd_stop_disk", 00:05:45.141 "nbd_start_disk", 00:05:45.141 "ublk_recover_disk", 00:05:45.141 "ublk_get_disks", 00:05:45.141 "ublk_stop_disk", 00:05:45.141 "ublk_start_disk", 00:05:45.141 "ublk_destroy_target", 00:05:45.141 "ublk_create_target", 00:05:45.141 "virtio_blk_create_transport", 00:05:45.141 "virtio_blk_get_transports", 00:05:45.141 "vhost_controller_set_coalescing", 00:05:45.141 "vhost_get_controllers", 00:05:45.141 "vhost_delete_controller", 00:05:45.141 "vhost_create_blk_controller", 00:05:45.141 "vhost_scsi_controller_remove_target", 00:05:45.141 "vhost_scsi_controller_add_target", 00:05:45.141 "vhost_start_scsi_controller", 00:05:45.141 "vhost_create_scsi_controller", 00:05:45.141 "thread_set_cpumask", 00:05:45.141 "framework_get_scheduler", 00:05:45.141 "framework_set_scheduler", 00:05:45.141 "framework_get_reactors", 00:05:45.141 "thread_get_io_channels", 00:05:45.142 "thread_get_pollers", 00:05:45.142 "thread_get_stats", 00:05:45.142 "framework_monitor_context_switch", 00:05:45.142 "spdk_kill_instance", 00:05:45.142 "log_enable_timestamps", 00:05:45.142 "log_get_flags", 00:05:45.142 "log_clear_flag", 00:05:45.142 "log_set_flag", 00:05:45.142 "log_get_level", 00:05:45.142 "log_set_level", 00:05:45.142 "log_get_print_level", 00:05:45.142 "log_set_print_level", 00:05:45.142 "framework_enable_cpumask_locks", 00:05:45.142 "framework_disable_cpumask_locks", 00:05:45.142 "framework_wait_init", 00:05:45.142 "framework_start_init", 00:05:45.142 "scsi_get_devices", 00:05:45.142 "bdev_get_histogram", 00:05:45.142 "bdev_enable_histogram", 00:05:45.142 "bdev_set_qos_limit", 00:05:45.142 "bdev_set_qd_sampling_period", 00:05:45.142 "bdev_get_bdevs", 00:05:45.142 "bdev_reset_iostat", 00:05:45.142 "bdev_get_iostat", 00:05:45.142 "bdev_examine", 00:05:45.142 "bdev_wait_for_examine", 00:05:45.142 "bdev_set_options", 00:05:45.142 "notify_get_notifications", 00:05:45.142 "notify_get_types", 00:05:45.142 "accel_get_stats", 00:05:45.142 "accel_set_options", 00:05:45.142 "accel_set_driver", 00:05:45.142 "accel_crypto_key_destroy", 00:05:45.142 "accel_crypto_keys_get", 00:05:45.142 "accel_crypto_key_create", 00:05:45.142 "accel_assign_opc", 00:05:45.142 "accel_get_module_info", 00:05:45.142 "accel_get_opc_assignments", 00:05:45.142 "vmd_rescan", 00:05:45.142 "vmd_remove_device", 00:05:45.142 "vmd_enable", 00:05:45.142 "sock_set_default_impl", 00:05:45.142 "sock_impl_set_options", 00:05:45.142 "sock_impl_get_options", 00:05:45.142 "iobuf_get_stats", 00:05:45.142 "iobuf_set_options", 00:05:45.142 "framework_get_pci_devices", 00:05:45.142 "framework_get_config", 00:05:45.142 "framework_get_subsystems", 00:05:45.142 "trace_get_info", 00:05:45.142 "trace_get_tpoint_group_mask", 00:05:45.142 "trace_disable_tpoint_group", 00:05:45.142 "trace_enable_tpoint_group", 00:05:45.142 "trace_clear_tpoint_mask", 00:05:45.142 "trace_set_tpoint_mask", 00:05:45.142 "spdk_get_version", 00:05:45.142 "rpc_get_methods" 00:05:45.142 ] 00:05:45.142 11:01:56 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.142 11:01:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.142 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.142 11:01:56 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.142 11:01:56 -- spdkcli/tcp.sh@38 -- # killprocess 66602 00:05:45.142 
11:01:56 -- common/autotest_common.sh@936 -- # '[' -z 66602 ']' 00:05:45.142 11:01:56 -- common/autotest_common.sh@940 -- # kill -0 66602 00:05:45.142 11:01:56 -- common/autotest_common.sh@941 -- # uname 00:05:45.142 11:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.142 11:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66602 00:05:45.142 killing process with pid 66602 00:05:45.142 11:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.142 11:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.142 11:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66602' 00:05:45.142 11:01:56 -- common/autotest_common.sh@955 -- # kill 66602 00:05:45.142 11:01:56 -- common/autotest_common.sh@960 -- # wait 66602 00:05:45.401 ************************************ 00:05:45.401 END TEST spdkcli_tcp 00:05:45.401 ************************************ 00:05:45.401 00:05:45.401 real 0m1.714s 00:05:45.401 user 0m3.333s 00:05:45.401 sys 0m0.346s 00:05:45.401 11:01:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.401 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.401 11:01:56 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.401 11:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.401 11:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.401 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.401 ************************************ 00:05:45.401 START TEST dpdk_mem_utility 00:05:45.401 ************************************ 00:05:45.401 11:01:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.401 * Looking for test storage... 00:05:45.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:45.401 11:01:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:45.401 11:01:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:45.401 11:01:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:45.659 11:01:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:45.659 11:01:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:45.659 11:01:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:45.659 11:01:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:45.660 11:01:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:45.660 11:01:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:45.660 11:01:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.660 11:01:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:45.660 11:01:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:45.660 11:01:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:45.660 11:01:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:45.660 11:01:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:45.660 11:01:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:45.660 11:01:56 -- scripts/common.sh@344 -- # : 1 00:05:45.660 11:01:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:45.660 11:01:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.660 11:01:56 -- scripts/common.sh@364 -- # decimal 1 00:05:45.660 11:01:56 -- scripts/common.sh@352 -- # local d=1 00:05:45.660 11:01:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.660 11:01:56 -- scripts/common.sh@354 -- # echo 1 00:05:45.660 11:01:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:45.660 11:01:56 -- scripts/common.sh@365 -- # decimal 2 00:05:45.660 11:01:56 -- scripts/common.sh@352 -- # local d=2 00:05:45.660 11:01:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.660 11:01:56 -- scripts/common.sh@354 -- # echo 2 00:05:45.660 11:01:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:45.660 11:01:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:45.660 11:01:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:45.660 11:01:56 -- scripts/common.sh@367 -- # return 0 00:05:45.660 11:01:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.660 11:01:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.660 --rc genhtml_branch_coverage=1 00:05:45.660 --rc genhtml_function_coverage=1 00:05:45.660 --rc genhtml_legend=1 00:05:45.660 --rc geninfo_all_blocks=1 00:05:45.660 --rc geninfo_unexecuted_blocks=1 00:05:45.660 00:05:45.660 ' 00:05:45.660 11:01:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.660 --rc genhtml_branch_coverage=1 00:05:45.660 --rc genhtml_function_coverage=1 00:05:45.660 --rc genhtml_legend=1 00:05:45.660 --rc geninfo_all_blocks=1 00:05:45.660 --rc geninfo_unexecuted_blocks=1 00:05:45.660 00:05:45.660 ' 00:05:45.660 11:01:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.660 --rc genhtml_branch_coverage=1 00:05:45.660 --rc genhtml_function_coverage=1 00:05:45.660 --rc genhtml_legend=1 00:05:45.660 --rc geninfo_all_blocks=1 00:05:45.660 --rc geninfo_unexecuted_blocks=1 00:05:45.660 00:05:45.660 ' 00:05:45.660 11:01:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.660 --rc genhtml_branch_coverage=1 00:05:45.660 --rc genhtml_function_coverage=1 00:05:45.660 --rc genhtml_legend=1 00:05:45.660 --rc geninfo_all_blocks=1 00:05:45.660 --rc geninfo_unexecuted_blocks=1 00:05:45.660 00:05:45.660 ' 00:05:45.660 11:01:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.660 11:01:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66700 00:05:45.660 11:01:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.660 11:01:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66700 00:05:45.660 11:01:56 -- common/autotest_common.sh@829 -- # '[' -z 66700 ']' 00:05:45.660 11:01:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.660 11:01:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.660 11:01:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
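The dpdk_mem_utility test traced below asks the running target for a DPDK memory dump and then post-processes it with scripts/dpdk_mem_info.py. A condensed sketch of that sequence, using only the invocations visible in the trace (rpc_cmd is the test's wrapper around rpc.py; pointing it at /var/tmp/spdk.sock is an assumption based on the socket the target opened above):

spdk_dir=/home/vagrant/spdk_repo/spdk
"$spdk_dir/scripts/rpc.py" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
# The RPC replies with {"filename": "/tmp/spdk_mem_dump.txt"}; summarize that dump:
"$spdk_dir/scripts/dpdk_mem_info.py"        # heap / mempool / memzone totals
"$spdk_dir/scripts/dpdk_mem_info.py" -m 0   # per-element view of heap id 0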
00:05:45.660 11:01:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.660 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 [2024-12-06 11:01:56.649695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.660 [2024-12-06 11:01:56.649781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66700 ] 00:05:45.660 [2024-12-06 11:01:56.782298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.919 [2024-12-06 11:01:56.814339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.919 [2024-12-06 11:01:56.814505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.484 11:01:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.484 11:01:57 -- common/autotest_common.sh@862 -- # return 0 00:05:46.484 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.484 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.484 11:01:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.484 11:01:57 -- common/autotest_common.sh@10 -- # set +x 00:05:46.484 { 00:05:46.484 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.484 } 00:05:46.484 11:01:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.484 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.745 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:46.745 1 heaps totaling size 814.000000 MiB 00:05:46.745 size: 814.000000 MiB heap id: 0 00:05:46.745 end heaps---------- 00:05:46.745 8 mempools totaling size 598.116089 MiB 00:05:46.745 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.745 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.745 size: 84.521057 MiB name: bdev_io_66700 00:05:46.745 size: 51.011292 MiB name: evtpool_66700 00:05:46.745 size: 50.003479 MiB name: msgpool_66700 00:05:46.745 size: 21.763794 MiB name: PDU_Pool 00:05:46.745 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.745 size: 0.026123 MiB name: Session_Pool 00:05:46.745 end mempools------- 00:05:46.745 6 memzones totaling size 4.142822 MiB 00:05:46.745 size: 1.000366 MiB name: RG_ring_0_66700 00:05:46.745 size: 1.000366 MiB name: RG_ring_1_66700 00:05:46.745 size: 1.000366 MiB name: RG_ring_4_66700 00:05:46.745 size: 1.000366 MiB name: RG_ring_5_66700 00:05:46.745 size: 0.125366 MiB name: RG_ring_2_66700 00:05:46.745 size: 0.015991 MiB name: RG_ring_3_66700 00:05:46.745 end memzones------- 00:05:46.745 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.745 heap id: 0 total size: 814.000000 MiB number of busy elements: 308 number of free elements: 15 00:05:46.745 list of free elements. 
size: 12.470459 MiB 00:05:46.745 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:46.745 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:46.745 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:46.745 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:46.745 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:46.745 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:46.745 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:46.745 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:46.745 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:46.745 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:05:46.745 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:46.745 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:46.745 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:46.745 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:46.745 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:46.745 list of standard malloc elements. size: 199.266968 MiB 00:05:46.745 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:46.745 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:46.745 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:46.745 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:46.745 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.745 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.745 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:46.745 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.745 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:46.745 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:05:46.745 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:46.745 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:46.746 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93340 with size: 0.000183 MiB 
00:05:46.746 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:46.746 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:46.747 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:46.747 element at 
address: 0x200027e6c540 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ea00 
with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:46.747 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:46.747 list of memzone associated elements. 
size: 602.262573 MiB 00:05:46.747 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:46.747 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.747 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:46.747 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.747 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:46.747 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66700_0 00:05:46.747 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:46.747 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66700_0 00:05:46.747 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:46.747 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66700_0 00:05:46.747 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:46.747 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.747 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:46.747 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.747 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:46.747 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66700 00:05:46.747 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:46.747 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66700 00:05:46.747 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.747 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66700 00:05:46.747 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:46.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.747 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:46.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.747 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:46.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.747 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:46.747 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.747 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:46.747 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66700 00:05:46.747 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:46.747 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66700 00:05:46.747 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:46.747 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66700 00:05:46.747 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:46.747 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66700 00:05:46.747 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:46.747 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66700 00:05:46.747 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:46.747 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.747 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:46.747 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.747 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:46.747 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.747 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:46.747 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_66700 00:05:46.747 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:46.747 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.747 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:46.748 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.748 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:46.748 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66700 00:05:46.748 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:46.748 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.748 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:46.748 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66700 00:05:46.748 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.748 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66700 00:05:46.748 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:46.748 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.748 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.748 11:01:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66700 00:05:46.748 11:01:57 -- common/autotest_common.sh@936 -- # '[' -z 66700 ']' 00:05:46.748 11:01:57 -- common/autotest_common.sh@940 -- # kill -0 66700 00:05:46.748 11:01:57 -- common/autotest_common.sh@941 -- # uname 00:05:46.748 11:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.748 11:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66700 00:05:46.748 killing process with pid 66700 00:05:46.748 11:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.748 11:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.748 11:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66700' 00:05:46.748 11:01:57 -- common/autotest_common.sh@955 -- # kill 66700 00:05:46.748 11:01:57 -- common/autotest_common.sh@960 -- # wait 66700 00:05:47.007 00:05:47.007 real 0m1.623s 00:05:47.007 user 0m1.867s 00:05:47.007 sys 0m0.323s 00:05:47.007 ************************************ 00:05:47.007 END TEST dpdk_mem_utility 00:05:47.007 ************************************ 00:05:47.007 11:01:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.007 11:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 11:01:58 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:47.007 11:01:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.007 11:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.007 11:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 START TEST event 00:05:47.007 ************************************ 00:05:47.007 11:01:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:47.266 * Looking for test storage... 
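The element and memzone listing above is the DPDK heap dump captured during the dpdk_mem_utility test (target pid 66700): each "element at address ... with size ..." line is one malloc heap element, and the "associated memzone info" entries name the mempools and rings (MP_..., RG_ring_...) backed by those regions. A rough, illustrative way to summarize such a dump once it has been saved to a file — the file name below is an assumption, not something this log writes — is:

  # count heap elements and list the named memzones from a saved dump
  grep -c 'element at address' mem_dump.txt
  grep 'associated memzone info' mem_dump.txt | awk '{print $NF}'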
00:05:47.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:47.266 11:01:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.266 11:01:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.266 11:01:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.266 11:01:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.266 11:01:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.266 11:01:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.266 11:01:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.266 11:01:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.266 11:01:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.266 11:01:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.266 11:01:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.266 11:01:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.266 11:01:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.266 11:01:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.266 11:01:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.266 11:01:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.266 11:01:58 -- scripts/common.sh@344 -- # : 1 00:05:47.266 11:01:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.266 11:01:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.266 11:01:58 -- scripts/common.sh@364 -- # decimal 1 00:05:47.266 11:01:58 -- scripts/common.sh@352 -- # local d=1 00:05:47.266 11:01:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.266 11:01:58 -- scripts/common.sh@354 -- # echo 1 00:05:47.266 11:01:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.266 11:01:58 -- scripts/common.sh@365 -- # decimal 2 00:05:47.266 11:01:58 -- scripts/common.sh@352 -- # local d=2 00:05:47.266 11:01:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.266 11:01:58 -- scripts/common.sh@354 -- # echo 2 00:05:47.266 11:01:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.266 11:01:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.266 11:01:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.266 11:01:58 -- scripts/common.sh@367 -- # return 0 00:05:47.266 11:01:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.266 11:01:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.266 --rc genhtml_branch_coverage=1 00:05:47.266 --rc genhtml_function_coverage=1 00:05:47.266 --rc genhtml_legend=1 00:05:47.266 --rc geninfo_all_blocks=1 00:05:47.266 --rc geninfo_unexecuted_blocks=1 00:05:47.266 00:05:47.266 ' 00:05:47.266 11:01:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.266 --rc genhtml_branch_coverage=1 00:05:47.266 --rc genhtml_function_coverage=1 00:05:47.266 --rc genhtml_legend=1 00:05:47.266 --rc geninfo_all_blocks=1 00:05:47.266 --rc geninfo_unexecuted_blocks=1 00:05:47.266 00:05:47.266 ' 00:05:47.266 11:01:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.266 --rc genhtml_branch_coverage=1 00:05:47.266 --rc genhtml_function_coverage=1 00:05:47.266 --rc genhtml_legend=1 00:05:47.266 --rc geninfo_all_blocks=1 00:05:47.266 --rc geninfo_unexecuted_blocks=1 00:05:47.266 00:05:47.266 ' 00:05:47.266 11:01:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.266 --rc genhtml_branch_coverage=1 00:05:47.266 --rc genhtml_function_coverage=1 00:05:47.266 --rc genhtml_legend=1 00:05:47.266 --rc geninfo_all_blocks=1 00:05:47.266 --rc geninfo_unexecuted_blocks=1 00:05:47.266 00:05:47.266 ' 00:05:47.266 11:01:58 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:47.266 11:01:58 -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.267 11:01:58 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.267 11:01:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:47.267 11:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.267 11:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:47.267 ************************************ 00:05:47.267 START TEST event_perf 00:05:47.267 ************************************ 00:05:47.267 11:01:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.267 Running I/O for 1 seconds...[2024-12-06 11:01:58.301649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.267 [2024-12-06 11:01:58.301743] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66783 ] 00:05:47.526 [2024-12-06 11:01:58.437880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.526 [2024-12-06 11:01:58.470239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.526 [2024-12-06 11:01:58.470366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.526 [2024-12-06 11:01:58.470488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.526 Running I/O for 1 seconds...[2024-12-06 11:01:58.470488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.461 00:05:48.461 lcore 0: 205395 00:05:48.461 lcore 1: 205395 00:05:48.461 lcore 2: 205396 00:05:48.461 lcore 3: 205396 00:05:48.461 done. 00:05:48.461 00:05:48.461 real 0m1.234s 00:05:48.461 user 0m4.067s 00:05:48.461 sys 0m0.047s 00:05:48.461 11:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.461 ************************************ 00:05:48.461 END TEST event_perf 00:05:48.461 ************************************ 00:05:48.461 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:48.461 11:01:59 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.461 11:01:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:48.461 11:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.461 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:48.461 ************************************ 00:05:48.461 START TEST event_reactor 00:05:48.461 ************************************ 00:05:48.461 11:01:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:48.461 [2024-12-06 11:01:59.588249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:48.461 [2024-12-06 11:01:59.588346] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66817 ] 00:05:48.721 [2024-12-06 11:01:59.724220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.721 [2024-12-06 11:01:59.754533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.657 test_start 00:05:49.657 oneshot 00:05:49.657 tick 100 00:05:49.657 tick 100 00:05:49.657 tick 250 00:05:49.657 tick 100 00:05:49.657 tick 100 00:05:49.657 tick 100 00:05:49.657 tick 250 00:05:49.657 tick 500 00:05:49.657 tick 100 00:05:49.657 tick 100 00:05:49.657 tick 250 00:05:49.657 tick 100 00:05:49.657 tick 100 00:05:49.657 test_end 00:05:49.657 00:05:49.657 real 0m1.228s 00:05:49.657 user 0m1.082s 00:05:49.657 sys 0m0.042s 00:05:49.657 11:02:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.657 ************************************ 00:05:49.657 END TEST event_reactor 00:05:49.658 ************************************ 00:05:49.658 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:05:49.917 11:02:00 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.917 11:02:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:49.917 11:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.917 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:05:49.917 ************************************ 00:05:49.917 START TEST event_reactor_perf 00:05:49.917 ************************************ 00:05:49.917 11:02:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.917 [2024-12-06 11:02:00.872870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:49.917 [2024-12-06 11:02:00.872978] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66847 ] 00:05:49.917 [2024-12-06 11:02:01.008115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.917 [2024-12-06 11:02:01.037495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.293 test_start 00:05:51.293 test_end 00:05:51.293 Performance: 444928 events per second 00:05:51.293 00:05:51.293 real 0m1.245s 00:05:51.293 user 0m1.104s 00:05:51.293 sys 0m0.035s 00:05:51.293 ************************************ 00:05:51.293 END TEST event_reactor_perf 00:05:51.293 ************************************ 00:05:51.293 11:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.293 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.293 11:02:02 -- event/event.sh@49 -- # uname -s 00:05:51.293 11:02:02 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.293 11:02:02 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.293 11:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.293 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.293 ************************************ 00:05:51.293 START TEST event_scheduler 00:05:51.293 ************************************ 00:05:51.293 11:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.293 * Looking for test storage... 00:05:51.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:51.293 11:02:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.293 11:02:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.293 11:02:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.293 11:02:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.293 11:02:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.293 11:02:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.293 11:02:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.293 11:02:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.293 11:02:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.293 11:02:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.293 11:02:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.293 11:02:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.293 11:02:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.293 11:02:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.293 11:02:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.293 11:02:02 -- scripts/common.sh@344 -- # : 1 00:05:51.293 11:02:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.293 11:02:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.293 11:02:02 -- scripts/common.sh@364 -- # decimal 1 00:05:51.293 11:02:02 -- scripts/common.sh@352 -- # local d=1 00:05:51.293 11:02:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.293 11:02:02 -- scripts/common.sh@354 -- # echo 1 00:05:51.293 11:02:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.293 11:02:02 -- scripts/common.sh@365 -- # decimal 2 00:05:51.293 11:02:02 -- scripts/common.sh@352 -- # local d=2 00:05:51.293 11:02:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.293 11:02:02 -- scripts/common.sh@354 -- # echo 2 00:05:51.293 11:02:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.293 11:02:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.293 11:02:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.293 11:02:02 -- scripts/common.sh@367 -- # return 0 00:05:51.293 11:02:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.293 --rc genhtml_branch_coverage=1 00:05:51.293 --rc genhtml_function_coverage=1 00:05:51.293 --rc genhtml_legend=1 00:05:51.293 --rc geninfo_all_blocks=1 00:05:51.293 --rc geninfo_unexecuted_blocks=1 00:05:51.293 00:05:51.293 ' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.293 --rc genhtml_branch_coverage=1 00:05:51.293 --rc genhtml_function_coverage=1 00:05:51.293 --rc genhtml_legend=1 00:05:51.293 --rc geninfo_all_blocks=1 00:05:51.293 --rc geninfo_unexecuted_blocks=1 00:05:51.293 00:05:51.293 ' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.293 --rc genhtml_branch_coverage=1 00:05:51.293 --rc genhtml_function_coverage=1 00:05:51.293 --rc genhtml_legend=1 00:05:51.293 --rc geninfo_all_blocks=1 00:05:51.293 --rc geninfo_unexecuted_blocks=1 00:05:51.293 00:05:51.293 ' 00:05:51.293 11:02:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.293 --rc genhtml_branch_coverage=1 00:05:51.293 --rc genhtml_function_coverage=1 00:05:51.293 --rc genhtml_legend=1 00:05:51.293 --rc geninfo_all_blocks=1 00:05:51.293 --rc geninfo_unexecuted_blocks=1 00:05:51.293 00:05:51.293 ' 00:05:51.293 11:02:02 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.293 11:02:02 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66916 00:05:51.293 11:02:02 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.293 11:02:02 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.293 11:02:02 -- scheduler/scheduler.sh@37 -- # waitforlisten 66916 00:05:51.293 11:02:02 -- common/autotest_common.sh@829 -- # '[' -z 66916 ']' 00:05:51.293 11:02:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.293 11:02:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.293 11:02:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
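The scheduler suite above launches test/event/scheduler/scheduler with -m 0xF -p 0x2 --wait-for-rpc -f, so the app sits paused on /var/tmp/spdk.sock until RPCs configure it; the framework_set_scheduler and framework_start_init calls that follow in this log do exactly that. A minimal manual equivalent against the same socket (running these by hand is an assumption for illustration, not part of the test) would be:

  # select the dynamic scheduler while the framework is paused, then let initialization proceed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init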
00:05:51.293 11:02:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.293 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.293 [2024-12-06 11:02:02.391520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.294 [2024-12-06 11:02:02.391657] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66916 ] 00:05:51.553 [2024-12-06 11:02:02.530318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.553 [2024-12-06 11:02:02.563889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.553 [2024-12-06 11:02:02.564053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.553 [2024-12-06 11:02:02.564090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.553 [2024-12-06 11:02:02.564091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.553 11:02:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.553 11:02:02 -- common/autotest_common.sh@862 -- # return 0 00:05:51.553 11:02:02 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.553 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.553 POWER: Env isn't set yet! 00:05:51.553 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:51.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.553 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.553 POWER: Attempting to initialise PSTAT power management... 00:05:51.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.553 POWER: Cannot set governor of lcore 0 to performance 00:05:51.553 POWER: Attempting to initialise AMD PSTATE power management... 00:05:51.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.553 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.553 POWER: Attempting to initialise CPPC power management... 00:05:51.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.553 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.553 POWER: Attempting to initialise VM power management... 
00:05:51.553 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:51.553 POWER: Unable to set Power Management Environment for lcore 0 00:05:51.553 [2024-12-06 11:02:02.661425] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:51.553 [2024-12-06 11:02:02.661437] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:51.553 [2024-12-06 11:02:02.661445] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:51.553 [2024-12-06 11:02:02.661457] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.553 [2024-12-06 11:02:02.661464] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.553 [2024-12-06 11:02:02.661470] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.553 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.553 11:02:02 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.553 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.553 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 [2024-12-06 11:02:02.712347] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.812 11:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.812 11:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 ************************************ 00:05:51.812 START TEST scheduler_create_thread 00:05:51.812 ************************************ 00:05:51.812 11:02:02 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 2 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 3 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 4 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 5 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 6 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 7 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 8 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 9 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.812 11:02:02 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.812 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.812 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 10 00:05:51.812 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.813 11:02:02 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.813 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.813 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.813 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.813 11:02:02 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.813 11:02:02 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.813 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.813 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.813 11:02:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.813 11:02:02 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.813 11:02:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.813 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.222 11:02:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.222 11:02:04 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.222 11:02:04 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.222 11:02:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.222 11:02:04 -- common/autotest_common.sh@10 -- # set +x 00:05:54.592 11:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.592 00:05:54.592 real 0m2.608s 00:05:54.592 user 0m0.017s 00:05:54.592 sys 0m0.006s 00:05:54.592 ************************************ 00:05:54.592 END TEST scheduler_create_thread 00:05:54.592 ************************************ 00:05:54.592 11:02:05 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.592 11:02:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.592 11:02:05 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.592 11:02:05 -- scheduler/scheduler.sh@46 -- # killprocess 66916 00:05:54.592 11:02:05 -- common/autotest_common.sh@936 -- # '[' -z 66916 ']' 00:05:54.592 11:02:05 -- common/autotest_common.sh@940 -- # kill -0 66916 00:05:54.592 11:02:05 -- common/autotest_common.sh@941 -- # uname 00:05:54.592 11:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.592 11:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66916 00:05:54.592 11:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:54.592 killing process with pid 66916 00:05:54.592 11:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:54.592 11:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66916' 00:05:54.592 11:02:05 -- common/autotest_common.sh@955 -- # kill 66916 00:05:54.593 11:02:05 -- common/autotest_common.sh@960 -- # wait 66916 00:05:54.851 [2024-12-06 11:02:05.814130] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:54.851 00:05:54.851 real 0m3.803s 00:05:54.851 user 0m5.728s 00:05:54.851 sys 0m0.295s 00:05:54.851 11:02:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.851 11:02:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.851 ************************************ 00:05:54.851 END TEST event_scheduler 00:05:54.851 ************************************ 00:05:55.110 11:02:06 -- event/event.sh@51 -- # modprobe -n nbd 00:05:55.110 11:02:06 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:55.110 11:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.110 11:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.110 11:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.110 ************************************ 00:05:55.110 START TEST app_repeat 00:05:55.110 ************************************ 00:05:55.110 11:02:06 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:55.110 11:02:06 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.110 11:02:06 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.110 11:02:06 -- event/event.sh@13 -- # local nbd_list 00:05:55.110 11:02:06 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.110 11:02:06 -- event/event.sh@14 -- # local bdev_list 00:05:55.110 11:02:06 -- event/event.sh@15 -- # local repeat_times=4 00:05:55.110 11:02:06 -- event/event.sh@17 -- # modprobe nbd 00:05:55.110 11:02:06 -- event/event.sh@19 -- # repeat_pid=67002 00:05:55.110 Process app_repeat pid: 67002 00:05:55.110 11:02:06 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.110 11:02:06 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:55.110 11:02:06 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 67002' 00:05:55.110 11:02:06 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.110 spdk_app_start Round 0 00:05:55.110 11:02:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:55.110 11:02:06 -- event/event.sh@25 -- # waitforlisten 67002 /var/tmp/spdk-nbd.sock 00:05:55.110 11:02:06 -- common/autotest_common.sh@829 -- # '[' -z 67002 ']' 00:05:55.110 11:02:06 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.110 11:02:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.110 11:02:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.110 11:02:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.110 11:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.110 [2024-12-06 11:02:06.048190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.110 [2024-12-06 11:02:06.048304] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67002 ] 00:05:55.110 [2024-12-06 11:02:06.187114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.110 [2024-12-06 11:02:06.226738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.110 [2024-12-06 11:02:06.226751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.367 11:02:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.367 11:02:06 -- common/autotest_common.sh@862 -- # return 0 00:05:55.367 11:02:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.626 Malloc0 00:05:55.626 11:02:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.884 Malloc1 00:05:55.884 11:02:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.884 11:02:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.885 11:02:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.143 /dev/nbd0 00:05:56.143 11:02:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.143 11:02:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.143 11:02:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:56.143 11:02:07 -- common/autotest_common.sh@867 -- # local i 00:05:56.143 11:02:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.143 11:02:07 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.143 11:02:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:56.143 11:02:07 -- common/autotest_common.sh@871 -- # break 00:05:56.143 11:02:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.143 11:02:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.143 11:02:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.143 1+0 records in 00:05:56.143 1+0 records out 00:05:56.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281377 s, 14.6 MB/s 00:05:56.143 11:02:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.143 11:02:07 -- common/autotest_common.sh@884 -- # size=4096 00:05:56.143 11:02:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.143 11:02:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.143 11:02:07 -- common/autotest_common.sh@887 -- # return 0 00:05:56.143 11:02:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.143 11:02:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.143 11:02:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.403 /dev/nbd1 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.403 11:02:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:56.403 11:02:07 -- common/autotest_common.sh@867 -- # local i 00:05:56.403 11:02:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:56.403 11:02:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:56.403 11:02:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:56.403 11:02:07 -- common/autotest_common.sh@871 -- # break 00:05:56.403 11:02:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:56.403 11:02:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:56.403 11:02:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.403 1+0 records in 00:05:56.403 1+0 records out 00:05:56.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209012 s, 19.6 MB/s 00:05:56.403 11:02:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.403 11:02:07 -- common/autotest_common.sh@884 -- # size=4096 00:05:56.403 11:02:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.403 11:02:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:56.403 11:02:07 -- common/autotest_common.sh@887 -- # return 0 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.403 11:02:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.662 { 00:05:56.662 "nbd_device": "/dev/nbd0", 00:05:56.662 "bdev_name": "Malloc0" 00:05:56.662 }, 00:05:56.662 { 00:05:56.662 "nbd_device": "/dev/nbd1", 
00:05:56.662 "bdev_name": "Malloc1" 00:05:56.662 } 00:05:56.662 ]' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.662 { 00:05:56.662 "nbd_device": "/dev/nbd0", 00:05:56.662 "bdev_name": "Malloc0" 00:05:56.662 }, 00:05:56.662 { 00:05:56.662 "nbd_device": "/dev/nbd1", 00:05:56.662 "bdev_name": "Malloc1" 00:05:56.662 } 00:05:56.662 ]' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.662 /dev/nbd1' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.662 /dev/nbd1' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.662 256+0 records in 00:05:56.662 256+0 records out 00:05:56.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109052 s, 96.2 MB/s 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.662 256+0 records in 00:05:56.662 256+0 records out 00:05:56.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238076 s, 44.0 MB/s 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.662 256+0 records in 00:05:56.662 256+0 records out 00:05:56.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027157 s, 38.6 MB/s 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@51 -- # local i 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.662 11:02:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.921 11:02:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@41 -- # break 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.921 11:02:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@41 -- # break 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.179 11:02:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@65 -- # true 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.437 11:02:08 -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.437 11:02:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.007 11:02:08 -- event/event.sh@35 -- # sleep 3 00:05:58.007 [2024-12-06 11:02:08.947068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.007 [2024-12-06 11:02:08.977417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.007 [2024-12-06 
11:02:08.977427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.007 [2024-12-06 11:02:09.007754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.007 [2024-12-06 11:02:09.007822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.292 11:02:11 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.292 spdk_app_start Round 1 00:06:01.292 11:02:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.292 11:02:11 -- event/event.sh@25 -- # waitforlisten 67002 /var/tmp/spdk-nbd.sock 00:06:01.292 11:02:11 -- common/autotest_common.sh@829 -- # '[' -z 67002 ']' 00:06:01.292 11:02:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.292 11:02:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.293 11:02:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.293 11:02:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.293 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:01.293 11:02:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.293 11:02:12 -- common/autotest_common.sh@862 -- # return 0 00:06:01.293 11:02:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.293 Malloc0 00:06:01.293 11:02:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.551 Malloc1 00:06:01.551 11:02:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@12 -- # local i 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.551 11:02:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.810 /dev/nbd0 00:06:01.810 11:02:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.810 11:02:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.810 11:02:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.810 11:02:12 -- common/autotest_common.sh@867 -- # local i 00:06:01.810 11:02:12 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:01.810 11:02:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.810 11:02:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.810 11:02:12 -- common/autotest_common.sh@871 -- # break 00:06:01.810 11:02:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.810 11:02:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.810 11:02:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.810 1+0 records in 00:06:01.810 1+0 records out 00:06:01.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172059 s, 23.8 MB/s 00:06:01.810 11:02:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.810 11:02:12 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.810 11:02:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.810 11:02:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.810 11:02:12 -- common/autotest_common.sh@887 -- # return 0 00:06:01.810 11:02:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.810 11:02:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.810 11:02:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.069 /dev/nbd1 00:06:02.069 11:02:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.069 11:02:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.069 11:02:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:02.069 11:02:13 -- common/autotest_common.sh@867 -- # local i 00:06:02.070 11:02:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:02.070 11:02:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.070 11:02:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:02.070 11:02:13 -- common/autotest_common.sh@871 -- # break 00:06:02.070 11:02:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.070 11:02:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.070 11:02:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.070 1+0 records in 00:06:02.070 1+0 records out 00:06:02.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170664 s, 24.0 MB/s 00:06:02.070 11:02:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.070 11:02:13 -- common/autotest_common.sh@884 -- # size=4096 00:06:02.070 11:02:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.070 11:02:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.070 11:02:13 -- common/autotest_common.sh@887 -- # return 0 00:06:02.070 11:02:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.070 11:02:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.070 11:02:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.070 11:02:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.070 11:02:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.329 11:02:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.329 { 00:06:02.329 "nbd_device": "/dev/nbd0", 00:06:02.329 "bdev_name": "Malloc0" 00:06:02.329 }, 00:06:02.329 { 00:06:02.329 
"nbd_device": "/dev/nbd1", 00:06:02.329 "bdev_name": "Malloc1" 00:06:02.329 } 00:06:02.329 ]' 00:06:02.329 11:02:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.329 { 00:06:02.329 "nbd_device": "/dev/nbd0", 00:06:02.329 "bdev_name": "Malloc0" 00:06:02.329 }, 00:06:02.329 { 00:06:02.329 "nbd_device": "/dev/nbd1", 00:06:02.329 "bdev_name": "Malloc1" 00:06:02.329 } 00:06:02.329 ]' 00:06:02.329 11:02:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.590 /dev/nbd1' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.590 /dev/nbd1' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.590 256+0 records in 00:06:02.590 256+0 records out 00:06:02.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585833 s, 179 MB/s 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.590 256+0 records in 00:06:02.590 256+0 records out 00:06:02.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240666 s, 43.6 MB/s 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.590 256+0 records in 00:06:02.590 256+0 records out 00:06:02.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215098 s, 48.7 MB/s 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.590 11:02:13 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.590 11:02:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@41 -- # break 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.865 11:02:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@41 -- # break 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.151 11:02:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@65 -- # true 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.421 11:02:14 -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.421 11:02:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.678 11:02:14 -- event/event.sh@35 -- # sleep 3 00:06:03.678 [2024-12-06 11:02:14.762194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.678 [2024-12-06 11:02:14.794371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:03.678 [2024-12-06 11:02:14.794377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.678 [2024-12-06 11:02:14.822075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.678 [2024-12-06 11:02:14.822146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.959 11:02:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:06.959 spdk_app_start Round 2 00:06:06.959 11:02:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.959 11:02:17 -- event/event.sh@25 -- # waitforlisten 67002 /var/tmp/spdk-nbd.sock 00:06:06.959 11:02:17 -- common/autotest_common.sh@829 -- # '[' -z 67002 ']' 00:06:06.959 11:02:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.959 11:02:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.959 11:02:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.959 11:02:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.959 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 11:02:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.959 11:02:17 -- common/autotest_common.sh@862 -- # return 0 00:06:06.959 11:02:17 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.217 Malloc0 00:06:07.217 11:02:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.474 Malloc1 00:06:07.474 11:02:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@12 -- # local i 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.474 11:02:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.732 /dev/nbd0 00:06:07.732 11:02:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.732 11:02:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.732 11:02:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.732 11:02:18 -- common/autotest_common.sh@867 -- # local i 00:06:07.732 11:02:18 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.732 11:02:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.732 11:02:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.732 11:02:18 -- common/autotest_common.sh@871 -- # break 00:06:07.732 11:02:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.732 11:02:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.732 11:02:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.732 1+0 records in 00:06:07.732 1+0 records out 00:06:07.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170592 s, 24.0 MB/s 00:06:07.732 11:02:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.732 11:02:18 -- common/autotest_common.sh@884 -- # size=4096 00:06:07.732 11:02:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.732 11:02:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.732 11:02:18 -- common/autotest_common.sh@887 -- # return 0 00:06:07.732 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.732 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.732 11:02:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.991 /dev/nbd1 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.991 11:02:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:07.991 11:02:18 -- common/autotest_common.sh@867 -- # local i 00:06:07.991 11:02:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.991 11:02:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.991 11:02:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:07.991 11:02:18 -- common/autotest_common.sh@871 -- # break 00:06:07.991 11:02:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.991 11:02:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.991 11:02:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.991 1+0 records in 00:06:07.991 1+0 records out 00:06:07.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213843 s, 19.2 MB/s 00:06:07.991 11:02:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.991 11:02:18 -- common/autotest_common.sh@884 -- # size=4096 00:06:07.991 11:02:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.991 11:02:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.991 11:02:18 -- common/autotest_common.sh@887 -- # return 0 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.991 11:02:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.250 { 00:06:08.250 "nbd_device": "/dev/nbd0", 00:06:08.250 "bdev_name": "Malloc0" 
00:06:08.250 }, 00:06:08.250 { 00:06:08.250 "nbd_device": "/dev/nbd1", 00:06:08.250 "bdev_name": "Malloc1" 00:06:08.250 } 00:06:08.250 ]' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.250 { 00:06:08.250 "nbd_device": "/dev/nbd0", 00:06:08.250 "bdev_name": "Malloc0" 00:06:08.250 }, 00:06:08.250 { 00:06:08.250 "nbd_device": "/dev/nbd1", 00:06:08.250 "bdev_name": "Malloc1" 00:06:08.250 } 00:06:08.250 ]' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.250 /dev/nbd1' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.250 /dev/nbd1' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.250 11:02:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.250 256+0 records in 00:06:08.251 256+0 records out 00:06:08.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106437 s, 98.5 MB/s 00:06:08.251 11:02:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.251 11:02:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.251 256+0 records in 00:06:08.251 256+0 records out 00:06:08.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243584 s, 43.0 MB/s 00:06:08.251 11:02:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.251 11:02:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.251 256+0 records in 00:06:08.251 256+0 records out 00:06:08.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022721 s, 46.2 MB/s 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.509 11:02:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@41 -- # break 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.768 11:02:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.769 11:02:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@41 -- # break 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.027 11:02:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.285 11:02:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.285 11:02:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.285 11:02:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.285 11:02:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.285 11:02:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@65 -- # true 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.286 11:02:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.286 11:02:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.543 11:02:20 -- event/event.sh@35 -- # sleep 3 00:06:09.543 [2024-12-06 11:02:20.638338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.543 [2024-12-06 11:02:20.671513] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:09.543 [2024-12-06 11:02:20.671522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.800 [2024-12-06 11:02:20.700102] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.800 [2024-12-06 11:02:20.700175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.085 11:02:23 -- event/event.sh@38 -- # waitforlisten 67002 /var/tmp/spdk-nbd.sock 00:06:13.085 11:02:23 -- common/autotest_common.sh@829 -- # '[' -z 67002 ']' 00:06:13.085 11:02:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.085 11:02:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.085 11:02:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.085 11:02:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.085 11:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:13.085 11:02:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.085 11:02:23 -- common/autotest_common.sh@862 -- # return 0 00:06:13.085 11:02:23 -- event/event.sh@39 -- # killprocess 67002 00:06:13.085 11:02:23 -- common/autotest_common.sh@936 -- # '[' -z 67002 ']' 00:06:13.085 11:02:23 -- common/autotest_common.sh@940 -- # kill -0 67002 00:06:13.085 11:02:23 -- common/autotest_common.sh@941 -- # uname 00:06:13.085 11:02:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.085 11:02:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67002 00:06:13.085 killing process with pid 67002 00:06:13.085 11:02:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.086 11:02:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.086 11:02:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67002' 00:06:13.086 11:02:23 -- common/autotest_common.sh@955 -- # kill 67002 00:06:13.086 11:02:23 -- common/autotest_common.sh@960 -- # wait 67002 00:06:13.086 spdk_app_start is called in Round 0. 00:06:13.086 Shutdown signal received, stop current app iteration 00:06:13.086 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:13.086 spdk_app_start is called in Round 1. 00:06:13.086 Shutdown signal received, stop current app iteration 00:06:13.086 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:13.086 spdk_app_start is called in Round 2. 00:06:13.086 Shutdown signal received, stop current app iteration 00:06:13.086 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:13.086 spdk_app_start is called in Round 3. 
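Each app_repeat round traced above performs the same NBD round trip: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device, verify it byte-for-byte, then tear everything down and send SIGTERM so the next round can restart the app. Below is a minimal standalone sketch of one round, reconstructed only from the commands visible in this trace; the real logic lives in test/event/event.sh and the nbd_common.sh helpers and includes retries and cleanup that are omitted here.

    #!/usr/bin/env bash
    # Simplified reconstruction of one app_repeat round; paths, sizes and RPC
    # calls are taken from the trace above, error handling is omitted.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Create two malloc bdevs (size 64, block size 4096) and export them over NBD.
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    grep -q -w nbd0 /proc/partitions       # waitfornbd retries this up to 20 times

    # Write 1 MiB of random data through each NBD device, then verify it.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"

    # Tear down and restart the app for the next round.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM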
00:06:13.086 Shutdown signal received, stop current app iteration 00:06:13.086 11:02:23 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.086 11:02:23 -- event/event.sh@42 -- # return 0 00:06:13.086 00:06:13.086 real 0m17.944s 00:06:13.086 user 0m40.970s 00:06:13.086 sys 0m2.429s 00:06:13.086 11:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.086 ************************************ 00:06:13.086 END TEST app_repeat 00:06:13.086 ************************************ 00:06:13.086 11:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:13.086 11:02:24 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.086 11:02:24 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.086 11:02:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.086 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:13.086 ************************************ 00:06:13.086 START TEST cpu_locks 00:06:13.086 ************************************ 00:06:13.086 11:02:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.086 * Looking for test storage... 00:06:13.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:13.086 11:02:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.086 11:02:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.086 11:02:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.086 11:02:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.086 11:02:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.086 11:02:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.086 11:02:24 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.086 11:02:24 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.086 11:02:24 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.086 11:02:24 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.086 11:02:24 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.086 11:02:24 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.086 11:02:24 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.086 11:02:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.086 11:02:24 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.086 11:02:24 -- scripts/common.sh@344 -- # : 1 00:06:13.086 11:02:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.086 11:02:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.086 11:02:24 -- scripts/common.sh@364 -- # decimal 1 00:06:13.086 11:02:24 -- scripts/common.sh@352 -- # local d=1 00:06:13.086 11:02:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.086 11:02:24 -- scripts/common.sh@354 -- # echo 1 00:06:13.086 11:02:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.086 11:02:24 -- scripts/common.sh@365 -- # decimal 2 00:06:13.086 11:02:24 -- scripts/common.sh@352 -- # local d=2 00:06:13.086 11:02:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.086 11:02:24 -- scripts/common.sh@354 -- # echo 2 00:06:13.086 11:02:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.086 11:02:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.086 11:02:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.086 11:02:24 -- scripts/common.sh@367 -- # return 0 00:06:13.086 11:02:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 11:02:24 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.086 11:02:24 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.086 11:02:24 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.086 11:02:24 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.086 11:02:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.086 11:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.086 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:13.086 ************************************ 00:06:13.086 START TEST default_locks 00:06:13.086 ************************************ 00:06:13.086 11:02:24 -- common/autotest_common.sh@1114 -- # default_locks 00:06:13.086 11:02:24 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67429 00:06:13.086 11:02:24 -- event/cpu_locks.sh@47 -- # waitforlisten 67429 00:06:13.086 11:02:24 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:13.086 11:02:24 -- common/autotest_common.sh@829 -- # '[' -z 67429 ']' 00:06:13.086 11:02:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.086 11:02:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.086 11:02:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.086 11:02:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.086 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:13.345 [2024-12-06 11:02:24.254574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.345 [2024-12-06 11:02:24.254696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67429 ] 00:06:13.345 [2024-12-06 11:02:24.389451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.345 [2024-12-06 11:02:24.420796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.345 [2024-12-06 11:02:24.420976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.280 11:02:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.280 11:02:25 -- common/autotest_common.sh@862 -- # return 0 00:06:14.280 11:02:25 -- event/cpu_locks.sh@49 -- # locks_exist 67429 00:06:14.280 11:02:25 -- event/cpu_locks.sh@22 -- # lslocks -p 67429 00:06:14.280 11:02:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.538 11:02:25 -- event/cpu_locks.sh@50 -- # killprocess 67429 00:06:14.538 11:02:25 -- common/autotest_common.sh@936 -- # '[' -z 67429 ']' 00:06:14.538 11:02:25 -- common/autotest_common.sh@940 -- # kill -0 67429 00:06:14.538 11:02:25 -- common/autotest_common.sh@941 -- # uname 00:06:14.538 11:02:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.538 11:02:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67429 00:06:14.538 11:02:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.538 11:02:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.538 killing process with pid 67429 00:06:14.538 11:02:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67429' 00:06:14.538 11:02:25 -- common/autotest_common.sh@955 -- # kill 67429 00:06:14.538 11:02:25 -- common/autotest_common.sh@960 -- # wait 67429 00:06:14.796 11:02:25 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67429 00:06:14.796 11:02:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.796 11:02:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67429 00:06:14.796 11:02:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.796 11:02:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.796 11:02:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.796 11:02:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.796 11:02:25 -- common/autotest_common.sh@653 -- # waitforlisten 67429 00:06:14.796 11:02:25 -- common/autotest_common.sh@829 -- # '[' -z 67429 ']' 00:06:14.796 11:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.796 11:02:25 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.796 11:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.796 11:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.796 11:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67429) - No such process 00:06:14.796 ERROR: process (pid: 67429) is no longer running 00:06:14.796 11:02:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.796 11:02:25 -- common/autotest_common.sh@862 -- # return 1 00:06:14.796 11:02:25 -- common/autotest_common.sh@653 -- # es=1 00:06:14.797 11:02:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.797 11:02:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.797 11:02:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.797 11:02:25 -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.797 11:02:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.797 11:02:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.797 11:02:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.797 00:06:14.797 real 0m1.549s 00:06:14.797 user 0m1.774s 00:06:14.797 sys 0m0.368s 00:06:14.797 11:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.797 11:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.797 ************************************ 00:06:14.797 END TEST default_locks 00:06:14.797 ************************************ 00:06:14.797 11:02:25 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.797 11:02:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.797 11:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.797 11:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.797 ************************************ 00:06:14.797 START TEST default_locks_via_rpc 00:06:14.797 ************************************ 00:06:14.797 11:02:25 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:14.797 11:02:25 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67481 00:06:14.797 11:02:25 -- event/cpu_locks.sh@63 -- # waitforlisten 67481 00:06:14.797 11:02:25 -- common/autotest_common.sh@829 -- # '[' -z 67481 ']' 00:06:14.797 11:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.797 11:02:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.797 11:02:25 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.797 11:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.797 11:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.797 11:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.797 [2024-12-06 11:02:25.864333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
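The default_locks run that finished just above boils down to one check: while spdk_tgt runs with a core mask, lslocks must show a spdk_cpu_lock entry for its pid, and once the process is killed that owner must be gone. A rough sketch using only the commands seen in the trace, not the actual cpu_locks.sh helpers:

    #!/usr/bin/env bash
    # Hedged reconstruction of the default_locks check from the trace above;
    # waitforlisten/killprocess do considerably more than this.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                      # target pinned to core 0
    pid=$!
    # ... wait for /var/tmp/spdk.sock to come up, as waitforlisten does ...

    # While the app runs it must hold a file lock whose name contains spdk_cpu_lock.
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"

    kill "$pid"; wait "$pid"
    # The trace then expects a second waitforlisten on the same pid to fail
    # ("No such process"), i.e. the lock owner no longer exists.
    kill -0 "$pid" 2>/dev/null || echo "lock owner is gone"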
00:06:14.797 [2024-12-06 11:02:25.864452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67481 ] 00:06:15.054 [2024-12-06 11:02:25.994910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.054 [2024-12-06 11:02:26.025842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.054 [2024-12-06 11:02:26.026021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.986 11:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.986 11:02:26 -- common/autotest_common.sh@862 -- # return 0 00:06:15.986 11:02:26 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.986 11:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.986 11:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.986 11:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.986 11:02:26 -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.986 11:02:26 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.986 11:02:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.987 11:02:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.987 11:02:26 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.987 11:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.987 11:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.987 11:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.987 11:02:26 -- event/cpu_locks.sh@71 -- # locks_exist 67481 00:06:15.987 11:02:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.987 11:02:26 -- event/cpu_locks.sh@22 -- # lslocks -p 67481 00:06:16.245 11:02:27 -- event/cpu_locks.sh@73 -- # killprocess 67481 00:06:16.245 11:02:27 -- common/autotest_common.sh@936 -- # '[' -z 67481 ']' 00:06:16.245 11:02:27 -- common/autotest_common.sh@940 -- # kill -0 67481 00:06:16.245 11:02:27 -- common/autotest_common.sh@941 -- # uname 00:06:16.245 11:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.245 11:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67481 00:06:16.245 11:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.245 11:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.245 killing process with pid 67481 00:06:16.245 11:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67481' 00:06:16.245 11:02:27 -- common/autotest_common.sh@955 -- # kill 67481 00:06:16.245 11:02:27 -- common/autotest_common.sh@960 -- # wait 67481 00:06:16.502 00:06:16.503 real 0m1.711s 00:06:16.503 user 0m1.949s 00:06:16.503 sys 0m0.442s 00:06:16.503 11:02:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.503 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:16.503 ************************************ 00:06:16.503 END TEST default_locks_via_rpc 00:06:16.503 ************************************ 00:06:16.503 11:02:27 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.503 11:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.503 11:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.503 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:16.503 
************************************ 00:06:16.503 START TEST non_locking_app_on_locked_coremask 00:06:16.503 ************************************ 00:06:16.503 11:02:27 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:16.503 11:02:27 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67526 00:06:16.503 11:02:27 -- event/cpu_locks.sh@81 -- # waitforlisten 67526 /var/tmp/spdk.sock 00:06:16.503 11:02:27 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.503 11:02:27 -- common/autotest_common.sh@829 -- # '[' -z 67526 ']' 00:06:16.503 11:02:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.503 11:02:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.503 11:02:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.503 11:02:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.503 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:16.503 [2024-12-06 11:02:27.625500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.503 [2024-12-06 11:02:27.625641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67526 ] 00:06:16.761 [2024-12-06 11:02:27.764918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.761 [2024-12-06 11:02:27.801630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.761 [2024-12-06 11:02:27.801791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.695 11:02:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.695 11:02:28 -- common/autotest_common.sh@862 -- # return 0 00:06:17.695 11:02:28 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67542 00:06:17.695 11:02:28 -- event/cpu_locks.sh@85 -- # waitforlisten 67542 /var/tmp/spdk2.sock 00:06:17.695 11:02:28 -- common/autotest_common.sh@829 -- # '[' -z 67542 ']' 00:06:17.695 11:02:28 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.695 11:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.695 11:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.695 11:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.695 11:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.695 11:02:28 -- common/autotest_common.sh@10 -- # set +x 00:06:17.695 [2024-12-06 11:02:28.629292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.695 [2024-12-06 11:02:28.629397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67542 ] 00:06:17.695 [2024-12-06 11:02:28.771327] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
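The non_locking_app_on_locked_coremask case above starts two targets on the same core mask; the second one only comes up because it opts out of the CPU core locks, which is what the "CPU core locks deactivated" notice just above signals. Schematically, reconstructed from the command lines in the trace rather than from cpu_locks.sh itself:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # First instance (pid 67526 in the trace) takes the core 0 lock as usual.
    "$spdk_tgt" -m 0x1 &
    # Second instance (pid 67542) shares core 0 only because it disables the
    # core locks and talks over its own RPC socket.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Only the first instance is then expected to show a spdk_cpu_lock entry in lslocks.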
00:06:17.695 [2024-12-06 11:02:28.771387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.695 [2024-12-06 11:02:28.836502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.953 [2024-12-06 11:02:28.843781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.519 11:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.519 11:02:29 -- common/autotest_common.sh@862 -- # return 0 00:06:18.519 11:02:29 -- event/cpu_locks.sh@87 -- # locks_exist 67526 00:06:18.519 11:02:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.519 11:02:29 -- event/cpu_locks.sh@22 -- # lslocks -p 67526 00:06:19.086 11:02:30 -- event/cpu_locks.sh@89 -- # killprocess 67526 00:06:19.086 11:02:30 -- common/autotest_common.sh@936 -- # '[' -z 67526 ']' 00:06:19.086 11:02:30 -- common/autotest_common.sh@940 -- # kill -0 67526 00:06:19.086 11:02:30 -- common/autotest_common.sh@941 -- # uname 00:06:19.086 11:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.086 11:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67526 00:06:19.086 11:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.086 11:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.086 killing process with pid 67526 00:06:19.086 11:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67526' 00:06:19.086 11:02:30 -- common/autotest_common.sh@955 -- # kill 67526 00:06:19.086 11:02:30 -- common/autotest_common.sh@960 -- # wait 67526 00:06:19.655 11:02:30 -- event/cpu_locks.sh@90 -- # killprocess 67542 00:06:19.655 11:02:30 -- common/autotest_common.sh@936 -- # '[' -z 67542 ']' 00:06:19.655 11:02:30 -- common/autotest_common.sh@940 -- # kill -0 67542 00:06:19.655 11:02:30 -- common/autotest_common.sh@941 -- # uname 00:06:19.655 11:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.655 11:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67542 00:06:19.655 11:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.655 11:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.655 killing process with pid 67542 00:06:19.655 11:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67542' 00:06:19.655 11:02:30 -- common/autotest_common.sh@955 -- # kill 67542 00:06:19.655 11:02:30 -- common/autotest_common.sh@960 -- # wait 67542 00:06:19.914 00:06:19.914 real 0m3.277s 00:06:19.914 user 0m3.890s 00:06:19.914 sys 0m0.793s 00:06:19.914 11:02:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.914 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:19.914 ************************************ 00:06:19.914 END TEST non_locking_app_on_locked_coremask 00:06:19.914 ************************************ 00:06:19.914 11:02:30 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.914 11:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.914 11:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.914 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:19.914 ************************************ 00:06:19.914 START TEST locking_app_on_unlocked_coremask 00:06:19.914 ************************************ 00:06:19.914 11:02:30 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:19.914 11:02:30 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67604 00:06:19.914 11:02:30 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.914 11:02:30 -- event/cpu_locks.sh@99 -- # waitforlisten 67604 /var/tmp/spdk.sock 00:06:19.914 11:02:30 -- common/autotest_common.sh@829 -- # '[' -z 67604 ']' 00:06:19.914 11:02:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.914 11:02:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.914 11:02:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.914 11:02:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.914 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:19.914 [2024-12-06 11:02:30.956341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.914 [2024-12-06 11:02:30.956445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67604 ] 00:06:20.172 [2024-12-06 11:02:31.094915] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.172 [2024-12-06 11:02:31.094964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.172 [2024-12-06 11:02:31.125954] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.172 [2024-12-06 11:02:31.126120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.741 11:02:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.741 11:02:31 -- common/autotest_common.sh@862 -- # return 0 00:06:20.741 11:02:31 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67614 00:06:20.741 11:02:31 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.741 11:02:31 -- event/cpu_locks.sh@103 -- # waitforlisten 67614 /var/tmp/spdk2.sock 00:06:20.741 11:02:31 -- common/autotest_common.sh@829 -- # '[' -z 67614 ']' 00:06:20.741 11:02:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.741 11:02:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.741 11:02:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.741 11:02:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.741 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:06:20.999 [2024-12-06 11:02:31.917427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.999 [2024-12-06 11:02:31.917523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67614 ] 00:06:20.999 [2024-12-06 11:02:32.058925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.999 [2024-12-06 11:02:32.121763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.999 [2024-12-06 11:02:32.121922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.934 11:02:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.934 11:02:32 -- common/autotest_common.sh@862 -- # return 0 00:06:21.934 11:02:32 -- event/cpu_locks.sh@105 -- # locks_exist 67614 00:06:21.934 11:02:32 -- event/cpu_locks.sh@22 -- # lslocks -p 67614 00:06:21.934 11:02:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.869 11:02:33 -- event/cpu_locks.sh@107 -- # killprocess 67604 00:06:22.869 11:02:33 -- common/autotest_common.sh@936 -- # '[' -z 67604 ']' 00:06:22.869 11:02:33 -- common/autotest_common.sh@940 -- # kill -0 67604 00:06:22.869 11:02:33 -- common/autotest_common.sh@941 -- # uname 00:06:22.869 11:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.869 11:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67604 00:06:22.869 11:02:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.869 11:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.869 killing process with pid 67604 00:06:22.869 11:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67604' 00:06:22.869 11:02:33 -- common/autotest_common.sh@955 -- # kill 67604 00:06:22.869 11:02:33 -- common/autotest_common.sh@960 -- # wait 67604 00:06:23.127 11:02:34 -- event/cpu_locks.sh@108 -- # killprocess 67614 00:06:23.127 11:02:34 -- common/autotest_common.sh@936 -- # '[' -z 67614 ']' 00:06:23.127 11:02:34 -- common/autotest_common.sh@940 -- # kill -0 67614 00:06:23.127 11:02:34 -- common/autotest_common.sh@941 -- # uname 00:06:23.127 11:02:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.127 11:02:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67614 00:06:23.127 11:02:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.127 11:02:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.127 killing process with pid 67614 00:06:23.127 11:02:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67614' 00:06:23.127 11:02:34 -- common/autotest_common.sh@955 -- # kill 67614 00:06:23.127 11:02:34 -- common/autotest_common.sh@960 -- # wait 67614 00:06:23.386 00:06:23.386 real 0m3.507s 00:06:23.386 user 0m4.184s 00:06:23.386 sys 0m0.871s 00:06:23.386 11:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.386 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.386 ************************************ 00:06:23.386 END TEST locking_app_on_unlocked_coremask 00:06:23.386 ************************************ 00:06:23.386 11:02:34 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.386 11:02:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.386 11:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.386 11:02:34 -- common/autotest_common.sh@10 -- # set +x 
00:06:23.386 ************************************ 00:06:23.386 START TEST locking_app_on_locked_coremask 00:06:23.386 ************************************ 00:06:23.386 11:02:34 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:23.386 11:02:34 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67676 00:06:23.386 11:02:34 -- event/cpu_locks.sh@116 -- # waitforlisten 67676 /var/tmp/spdk.sock 00:06:23.386 11:02:34 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.386 11:02:34 -- common/autotest_common.sh@829 -- # '[' -z 67676 ']' 00:06:23.386 11:02:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.386 11:02:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.386 11:02:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.386 11:02:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.386 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.386 [2024-12-06 11:02:34.501327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.386 [2024-12-06 11:02:34.501431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67676 ] 00:06:23.645 [2024-12-06 11:02:34.635019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.645 [2024-12-06 11:02:34.665393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.645 [2024-12-06 11:02:34.665592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.580 11:02:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.580 11:02:35 -- common/autotest_common.sh@862 -- # return 0 00:06:24.580 11:02:35 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67692 00:06:24.580 11:02:35 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.580 11:02:35 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67692 /var/tmp/spdk2.sock 00:06:24.580 11:02:35 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.580 11:02:35 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67692 /var/tmp/spdk2.sock 00:06:24.580 11:02:35 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.580 11:02:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.580 11:02:35 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.580 11:02:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.580 11:02:35 -- common/autotest_common.sh@653 -- # waitforlisten 67692 /var/tmp/spdk2.sock 00:06:24.580 11:02:35 -- common/autotest_common.sh@829 -- # '[' -z 67692 ']' 00:06:24.580 11:02:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.580 11:02:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.580 11:02:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:24.580 11:02:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.580 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.580 [2024-12-06 11:02:35.578386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.580 [2024-12-06 11:02:35.579010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ] 00:06:24.580 [2024-12-06 11:02:35.718584] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67676 has claimed it. 00:06:24.580 [2024-12-06 11:02:35.718649] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.146 ERROR: process (pid: 67692) is no longer running 00:06:25.146 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67692) - No such process 00:06:25.146 11:02:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.146 11:02:36 -- common/autotest_common.sh@862 -- # return 1 00:06:25.146 11:02:36 -- common/autotest_common.sh@653 -- # es=1 00:06:25.146 11:02:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.146 11:02:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.146 11:02:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.146 11:02:36 -- event/cpu_locks.sh@122 -- # locks_exist 67676 00:06:25.146 11:02:36 -- event/cpu_locks.sh@22 -- # lslocks -p 67676 00:06:25.146 11:02:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.754 11:02:36 -- event/cpu_locks.sh@124 -- # killprocess 67676 00:06:25.754 11:02:36 -- common/autotest_common.sh@936 -- # '[' -z 67676 ']' 00:06:25.754 11:02:36 -- common/autotest_common.sh@940 -- # kill -0 67676 00:06:25.754 11:02:36 -- common/autotest_common.sh@941 -- # uname 00:06:25.754 11:02:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.754 11:02:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67676 00:06:25.754 11:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.754 11:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.754 killing process with pid 67676 00:06:25.754 11:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67676' 00:06:25.754 11:02:36 -- common/autotest_common.sh@955 -- # kill 67676 00:06:25.754 11:02:36 -- common/autotest_common.sh@960 -- # wait 67676 00:06:26.043 00:06:26.043 real 0m2.488s 00:06:26.043 user 0m3.053s 00:06:26.043 sys 0m0.512s 00:06:26.043 11:02:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.043 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:26.043 ************************************ 00:06:26.043 END TEST locking_app_on_locked_coremask 00:06:26.043 ************************************ 00:06:26.043 11:02:36 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.043 11:02:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.043 11:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.043 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:26.043 ************************************ 00:06:26.043 START TEST locking_overlapped_coremask 00:06:26.043 ************************************ 00:06:26.043 11:02:36 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:26.043 11:02:36 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67743 00:06:26.043 11:02:36 -- event/cpu_locks.sh@133 -- # waitforlisten 67743 /var/tmp/spdk.sock 00:06:26.043 11:02:36 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.043 11:02:36 -- common/autotest_common.sh@829 -- # '[' -z 67743 ']' 00:06:26.043 11:02:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.043 11:02:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.043 11:02:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.043 11:02:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.043 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:06:26.043 [2024-12-06 11:02:37.055618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.043 [2024-12-06 11:02:37.055739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67743 ] 00:06:26.302 [2024-12-06 11:02:37.195522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.302 [2024-12-06 11:02:37.230893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.302 [2024-12-06 11:02:37.231140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.302 [2024-12-06 11:02:37.231285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.302 [2024-12-06 11:02:37.231287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.238 11:02:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.238 11:02:38 -- common/autotest_common.sh@862 -- # return 0 00:06:27.238 11:02:38 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:27.238 11:02:38 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67761 00:06:27.238 11:02:38 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67761 /var/tmp/spdk2.sock 00:06:27.238 11:02:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.238 11:02:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67761 /var/tmp/spdk2.sock 00:06:27.238 11:02:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.238 11:02:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.238 11:02:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.238 11:02:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.238 11:02:38 -- common/autotest_common.sh@653 -- # waitforlisten 67761 /var/tmp/spdk2.sock 00:06:27.238 11:02:38 -- common/autotest_common.sh@829 -- # '[' -z 67761 ']' 00:06:27.238 11:02:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.238 11:02:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.238 11:02:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:27.238 11:02:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.238 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.238 [2024-12-06 11:02:38.100784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.238 [2024-12-06 11:02:38.100876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67761 ] 00:06:27.238 [2024-12-06 11:02:38.237670] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67743 has claimed it. 00:06:27.238 [2024-12-06 11:02:38.237739] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.807 ERROR: process (pid: 67761) is no longer running 00:06:27.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67761) - No such process 00:06:27.807 11:02:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.807 11:02:38 -- common/autotest_common.sh@862 -- # return 1 00:06:27.807 11:02:38 -- common/autotest_common.sh@653 -- # es=1 00:06:27.807 11:02:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.807 11:02:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.807 11:02:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.807 11:02:38 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.807 11:02:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.807 11:02:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.807 11:02:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.807 11:02:38 -- event/cpu_locks.sh@141 -- # killprocess 67743 00:06:27.807 11:02:38 -- common/autotest_common.sh@936 -- # '[' -z 67743 ']' 00:06:27.807 11:02:38 -- common/autotest_common.sh@940 -- # kill -0 67743 00:06:27.807 11:02:38 -- common/autotest_common.sh@941 -- # uname 00:06:27.807 11:02:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.807 11:02:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67743 00:06:27.807 11:02:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.807 11:02:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.807 killing process with pid 67743 00:06:27.807 11:02:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67743' 00:06:27.808 11:02:38 -- common/autotest_common.sh@955 -- # kill 67743 00:06:27.808 11:02:38 -- common/autotest_common.sh@960 -- # wait 67743 00:06:28.066 00:06:28.066 real 0m2.097s 00:06:28.066 user 0m6.156s 00:06:28.066 sys 0m0.305s 00:06:28.067 11:02:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.067 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.067 ************************************ 00:06:28.067 END TEST locking_overlapped_coremask 00:06:28.067 ************************************ 00:06:28.067 11:02:39 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.067 11:02:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.067 11:02:39 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.067 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.067 ************************************ 00:06:28.067 START TEST locking_overlapped_coremask_via_rpc 00:06:28.067 ************************************ 00:06:28.067 11:02:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:28.067 11:02:39 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67801 00:06:28.067 11:02:39 -- event/cpu_locks.sh@149 -- # waitforlisten 67801 /var/tmp/spdk.sock 00:06:28.067 11:02:39 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.067 11:02:39 -- common/autotest_common.sh@829 -- # '[' -z 67801 ']' 00:06:28.067 11:02:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.067 11:02:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.067 11:02:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.067 11:02:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.067 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.067 [2024-12-06 11:02:39.205866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.067 [2024-12-06 11:02:39.205983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67801 ] 00:06:28.326 [2024-12-06 11:02:39.344387] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.326 [2024-12-06 11:02:39.344440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.326 [2024-12-06 11:02:39.375475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.326 [2024-12-06 11:02:39.375943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.326 [2024-12-06 11:02:39.376078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.326 [2024-12-06 11:02:39.376082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.263 11:02:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.263 11:02:40 -- common/autotest_common.sh@862 -- # return 0 00:06:29.263 11:02:40 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67819 00:06:29.263 11:02:40 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.263 11:02:40 -- event/cpu_locks.sh@153 -- # waitforlisten 67819 /var/tmp/spdk2.sock 00:06:29.263 11:02:40 -- common/autotest_common.sh@829 -- # '[' -z 67819 ']' 00:06:29.263 11:02:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.263 11:02:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.263 11:02:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:29.263 11:02:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.263 11:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.263 [2024-12-06 11:02:40.231251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.263 [2024-12-06 11:02:40.231357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67819 ] 00:06:29.263 [2024-12-06 11:02:40.378251] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.263 [2024-12-06 11:02:40.378293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.522 [2024-12-06 11:02:40.450938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.522 [2024-12-06 11:02:40.451527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.522 [2024-12-06 11:02:40.451696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.522 [2024-12-06 11:02:40.451705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.089 11:02:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.089 11:02:41 -- common/autotest_common.sh@862 -- # return 0 00:06:30.089 11:02:41 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.089 11:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.089 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.089 11:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.089 11:02:41 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.089 11:02:41 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.089 11:02:41 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.089 11:02:41 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:30.089 11:02:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.089 11:02:41 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:30.089 11:02:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.089 11:02:41 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.089 11:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.089 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.089 [2024-12-06 11:02:41.163767] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67801 has claimed it. 
00:06:30.089 request: 00:06:30.089 { 00:06:30.089 "method": "framework_enable_cpumask_locks", 00:06:30.089 "req_id": 1 00:06:30.089 } 00:06:30.089 Got JSON-RPC error response 00:06:30.089 response: 00:06:30.089 { 00:06:30.089 "code": -32603, 00:06:30.089 "message": "Failed to claim CPU core: 2" 00:06:30.089 } 00:06:30.089 11:02:41 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:30.089 11:02:41 -- common/autotest_common.sh@653 -- # es=1 00:06:30.089 11:02:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.089 11:02:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.089 11:02:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.089 11:02:41 -- event/cpu_locks.sh@158 -- # waitforlisten 67801 /var/tmp/spdk.sock 00:06:30.089 11:02:41 -- common/autotest_common.sh@829 -- # '[' -z 67801 ']' 00:06:30.089 11:02:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.089 11:02:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.089 11:02:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.089 11:02:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.089 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.348 11:02:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.348 11:02:41 -- common/autotest_common.sh@862 -- # return 0 00:06:30.348 11:02:41 -- event/cpu_locks.sh@159 -- # waitforlisten 67819 /var/tmp/spdk2.sock 00:06:30.348 11:02:41 -- common/autotest_common.sh@829 -- # '[' -z 67819 ']' 00:06:30.348 11:02:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.348 11:02:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.348 11:02:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:30.348 11:02:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.348 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.607 11:02:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.607 11:02:41 -- common/autotest_common.sh@862 -- # return 0 00:06:30.607 11:02:41 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.608 11:02:41 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.608 11:02:41 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.608 11:02:41 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.608 00:06:30.608 real 0m2.487s 00:06:30.608 user 0m1.265s 00:06:30.608 sys 0m0.155s 00:06:30.608 11:02:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.608 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.608 ************************************ 00:06:30.608 END TEST locking_overlapped_coremask_via_rpc 00:06:30.608 ************************************ 00:06:30.608 11:02:41 -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.608 11:02:41 -- event/cpu_locks.sh@15 -- # [[ -z 67801 ]] 00:06:30.608 11:02:41 -- event/cpu_locks.sh@15 -- # killprocess 67801 00:06:30.608 11:02:41 -- common/autotest_common.sh@936 -- # '[' -z 67801 ']' 00:06:30.608 11:02:41 -- common/autotest_common.sh@940 -- # kill -0 67801 00:06:30.608 11:02:41 -- common/autotest_common.sh@941 -- # uname 00:06:30.608 11:02:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.608 11:02:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67801 00:06:30.608 11:02:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.608 11:02:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.608 11:02:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67801' 00:06:30.608 killing process with pid 67801 00:06:30.608 11:02:41 -- common/autotest_common.sh@955 -- # kill 67801 00:06:30.608 11:02:41 -- common/autotest_common.sh@960 -- # wait 67801 00:06:30.868 11:02:41 -- event/cpu_locks.sh@16 -- # [[ -z 67819 ]] 00:06:30.868 11:02:41 -- event/cpu_locks.sh@16 -- # killprocess 67819 00:06:30.868 11:02:41 -- common/autotest_common.sh@936 -- # '[' -z 67819 ']' 00:06:30.868 11:02:41 -- common/autotest_common.sh@940 -- # kill -0 67819 00:06:30.868 11:02:41 -- common/autotest_common.sh@941 -- # uname 00:06:30.868 11:02:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.868 11:02:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67819 00:06:30.868 11:02:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:30.868 11:02:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:30.868 killing process with pid 67819 00:06:30.868 11:02:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67819' 00:06:30.868 11:02:41 -- common/autotest_common.sh@955 -- # kill 67819 00:06:30.868 11:02:41 -- common/autotest_common.sh@960 -- # wait 67819 00:06:31.128 11:02:42 -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.128 11:02:42 -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.128 11:02:42 -- event/cpu_locks.sh@15 -- # [[ -z 67801 ]] 00:06:31.128 11:02:42 -- event/cpu_locks.sh@15 -- # killprocess 67801 00:06:31.128 11:02:42 -- 
common/autotest_common.sh@936 -- # '[' -z 67801 ']' 00:06:31.128 11:02:42 -- common/autotest_common.sh@940 -- # kill -0 67801 00:06:31.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67801) - No such process 00:06:31.128 Process with pid 67801 is not found 00:06:31.128 11:02:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67801 is not found' 00:06:31.128 11:02:42 -- event/cpu_locks.sh@16 -- # [[ -z 67819 ]] 00:06:31.128 11:02:42 -- event/cpu_locks.sh@16 -- # killprocess 67819 00:06:31.128 11:02:42 -- common/autotest_common.sh@936 -- # '[' -z 67819 ']' 00:06:31.128 11:02:42 -- common/autotest_common.sh@940 -- # kill -0 67819 00:06:31.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67819) - No such process 00:06:31.128 Process with pid 67819 is not found 00:06:31.128 11:02:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67819 is not found' 00:06:31.128 11:02:42 -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.128 00:06:31.128 real 0m18.185s 00:06:31.128 user 0m33.541s 00:06:31.128 sys 0m4.087s 00:06:31.128 11:02:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.128 ************************************ 00:06:31.128 END TEST cpu_locks 00:06:31.128 ************************************ 00:06:31.128 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.128 00:06:31.128 real 0m44.154s 00:06:31.128 user 1m26.706s 00:06:31.128 sys 0m7.199s 00:06:31.128 11:02:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.128 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.128 ************************************ 00:06:31.128 END TEST event 00:06:31.128 ************************************ 00:06:31.387 11:02:42 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.387 11:02:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.387 11:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.387 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 ************************************ 00:06:31.387 START TEST thread 00:06:31.387 ************************************ 00:06:31.387 11:02:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.387 * Looking for test storage... 
00:06:31.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.387 11:02:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:31.387 11:02:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:31.387 11:02:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:31.387 11:02:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:31.387 11:02:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:31.387 11:02:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:31.387 11:02:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:31.387 11:02:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:31.387 11:02:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:31.387 11:02:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.387 11:02:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:31.387 11:02:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:31.387 11:02:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:31.387 11:02:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:31.387 11:02:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:31.387 11:02:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:31.387 11:02:42 -- scripts/common.sh@344 -- # : 1 00:06:31.387 11:02:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:31.387 11:02:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.387 11:02:42 -- scripts/common.sh@364 -- # decimal 1 00:06:31.387 11:02:42 -- scripts/common.sh@352 -- # local d=1 00:06:31.387 11:02:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.387 11:02:42 -- scripts/common.sh@354 -- # echo 1 00:06:31.387 11:02:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:31.387 11:02:42 -- scripts/common.sh@365 -- # decimal 2 00:06:31.387 11:02:42 -- scripts/common.sh@352 -- # local d=2 00:06:31.387 11:02:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.387 11:02:42 -- scripts/common.sh@354 -- # echo 2 00:06:31.387 11:02:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:31.387 11:02:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:31.387 11:02:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:31.387 11:02:42 -- scripts/common.sh@367 -- # return 0 00:06:31.387 11:02:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.387 11:02:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.387 --rc genhtml_branch_coverage=1 00:06:31.388 --rc genhtml_function_coverage=1 00:06:31.388 --rc genhtml_legend=1 00:06:31.388 --rc geninfo_all_blocks=1 00:06:31.388 --rc geninfo_unexecuted_blocks=1 00:06:31.388 00:06:31.388 ' 00:06:31.388 11:02:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:31.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.388 --rc genhtml_branch_coverage=1 00:06:31.388 --rc genhtml_function_coverage=1 00:06:31.388 --rc genhtml_legend=1 00:06:31.388 --rc geninfo_all_blocks=1 00:06:31.388 --rc geninfo_unexecuted_blocks=1 00:06:31.388 00:06:31.388 ' 00:06:31.388 11:02:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:31.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.388 --rc genhtml_branch_coverage=1 00:06:31.388 --rc genhtml_function_coverage=1 00:06:31.388 --rc genhtml_legend=1 00:06:31.388 --rc geninfo_all_blocks=1 00:06:31.388 --rc geninfo_unexecuted_blocks=1 00:06:31.388 00:06:31.388 ' 00:06:31.388 11:02:42 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:31.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.388 --rc genhtml_branch_coverage=1 00:06:31.388 --rc genhtml_function_coverage=1 00:06:31.388 --rc genhtml_legend=1 00:06:31.388 --rc geninfo_all_blocks=1 00:06:31.388 --rc geninfo_unexecuted_blocks=1 00:06:31.388 00:06:31.388 ' 00:06:31.388 11:02:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.388 11:02:42 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:31.388 11:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.388 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.388 ************************************ 00:06:31.388 START TEST thread_poller_perf 00:06:31.388 ************************************ 00:06:31.388 11:02:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.388 [2024-12-06 11:02:42.478794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.388 [2024-12-06 11:02:42.478905] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67943 ] 00:06:31.647 [2024-12-06 11:02:42.616508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.647 [2024-12-06 11:02:42.647781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.647 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:32.583 [2024-12-06T11:02:43.730Z] ====================================== 00:06:32.583 [2024-12-06T11:02:43.730Z] busy:2206320102 (cyc) 00:06:32.583 [2024-12-06T11:02:43.730Z] total_run_count: 360000 00:06:32.583 [2024-12-06T11:02:43.730Z] tsc_hz: 2200000000 (cyc) 00:06:32.583 [2024-12-06T11:02:43.730Z] ====================================== 00:06:32.583 [2024-12-06T11:02:43.730Z] poller_cost: 6128 (cyc), 2785 (nsec) 00:06:32.583 00:06:32.583 real 0m1.240s 00:06:32.583 user 0m1.088s 00:06:32.583 sys 0m0.045s 00:06:32.583 11:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.583 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.583 ************************************ 00:06:32.583 END TEST thread_poller_perf 00:06:32.583 ************************************ 00:06:32.842 11:02:43 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.842 11:02:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:32.842 11:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.842 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.842 ************************************ 00:06:32.842 START TEST thread_poller_perf 00:06:32.842 ************************************ 00:06:32.842 11:02:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.842 [2024-12-06 11:02:43.770148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:32.842 [2024-12-06 11:02:43.770252] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67975 ] 00:06:32.842 [2024-12-06 11:02:43.893006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.842 [2024-12-06 11:02:43.922870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.842 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:34.217 [2024-12-06T11:02:45.364Z] ====================================== 00:06:34.217 [2024-12-06T11:02:45.364Z] busy:2202532525 (cyc) 00:06:34.217 [2024-12-06T11:02:45.364Z] total_run_count: 4820000 00:06:34.217 [2024-12-06T11:02:45.364Z] tsc_hz: 2200000000 (cyc) 00:06:34.217 [2024-12-06T11:02:45.364Z] ====================================== 00:06:34.217 [2024-12-06T11:02:45.364Z] poller_cost: 456 (cyc), 207 (nsec) 00:06:34.217 00:06:34.217 real 0m1.219s 00:06:34.217 user 0m1.077s 00:06:34.217 sys 0m0.035s 00:06:34.217 11:02:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.217 11:02:44 -- common/autotest_common.sh@10 -- # set +x 00:06:34.217 ************************************ 00:06:34.217 END TEST thread_poller_perf 00:06:34.217 ************************************ 00:06:34.217 11:02:45 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.217 00:06:34.217 real 0m2.725s 00:06:34.217 user 0m2.299s 00:06:34.217 sys 0m0.212s 00:06:34.217 11:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.217 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.217 ************************************ 00:06:34.217 END TEST thread 00:06:34.217 ************************************ 00:06:34.217 11:02:45 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:34.217 11:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.218 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.218 ************************************ 00:06:34.218 START TEST accel 00:06:34.218 ************************************ 00:06:34.218 11:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:34.218 * Looking for test storage... 
00:06:34.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:34.218 11:02:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:34.218 11:02:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:34.218 11:02:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:34.218 11:02:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:34.218 11:02:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:34.218 11:02:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:34.218 11:02:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:34.218 11:02:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:34.218 11:02:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.218 11:02:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:34.218 11:02:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:34.218 11:02:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:34.218 11:02:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:34.218 11:02:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:34.218 11:02:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:34.218 11:02:45 -- scripts/common.sh@344 -- # : 1 00:06:34.218 11:02:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:34.218 11:02:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.218 11:02:45 -- scripts/common.sh@364 -- # decimal 1 00:06:34.218 11:02:45 -- scripts/common.sh@352 -- # local d=1 00:06:34.218 11:02:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.218 11:02:45 -- scripts/common.sh@354 -- # echo 1 00:06:34.218 11:02:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:34.218 11:02:45 -- scripts/common.sh@365 -- # decimal 2 00:06:34.218 11:02:45 -- scripts/common.sh@352 -- # local d=2 00:06:34.218 11:02:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.218 11:02:45 -- scripts/common.sh@354 -- # echo 2 00:06:34.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.218 11:02:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:34.218 11:02:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:34.218 11:02:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:34.218 11:02:45 -- scripts/common.sh@367 -- # return 0 00:06:34.218 11:02:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:34.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.218 --rc genhtml_branch_coverage=1 00:06:34.218 --rc genhtml_function_coverage=1 00:06:34.218 --rc genhtml_legend=1 00:06:34.218 --rc geninfo_all_blocks=1 00:06:34.218 --rc geninfo_unexecuted_blocks=1 00:06:34.218 00:06:34.218 ' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:34.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.218 --rc genhtml_branch_coverage=1 00:06:34.218 --rc genhtml_function_coverage=1 00:06:34.218 --rc genhtml_legend=1 00:06:34.218 --rc geninfo_all_blocks=1 00:06:34.218 --rc geninfo_unexecuted_blocks=1 00:06:34.218 00:06:34.218 ' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:34.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.218 --rc genhtml_branch_coverage=1 00:06:34.218 --rc genhtml_function_coverage=1 00:06:34.218 --rc genhtml_legend=1 00:06:34.218 --rc geninfo_all_blocks=1 00:06:34.218 --rc geninfo_unexecuted_blocks=1 00:06:34.218 00:06:34.218 ' 00:06:34.218 11:02:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:34.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.218 --rc genhtml_branch_coverage=1 00:06:34.218 --rc genhtml_function_coverage=1 00:06:34.218 --rc genhtml_legend=1 00:06:34.218 --rc geninfo_all_blocks=1 00:06:34.218 --rc geninfo_unexecuted_blocks=1 00:06:34.218 00:06:34.218 ' 00:06:34.218 11:02:45 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:34.218 11:02:45 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:34.218 11:02:45 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.218 11:02:45 -- accel/accel.sh@59 -- # spdk_tgt_pid=68062 00:06:34.218 11:02:45 -- accel/accel.sh@60 -- # waitforlisten 68062 00:06:34.218 11:02:45 -- common/autotest_common.sh@829 -- # '[' -z 68062 ']' 00:06:34.218 11:02:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.218 11:02:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.218 11:02:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.218 11:02:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.218 11:02:45 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:34.218 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.218 11:02:45 -- accel/accel.sh@58 -- # build_accel_config 00:06:34.218 11:02:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.218 11:02:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.218 11:02:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.218 11:02:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.218 11:02:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.218 11:02:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.218 11:02:45 -- accel/accel.sh@42 -- # jq -r . 
00:06:34.218 [2024-12-06 11:02:45.310596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.218 [2024-12-06 11:02:45.310705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68062 ] 00:06:34.476 [2024-12-06 11:02:45.448224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.476 [2024-12-06 11:02:45.479392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.476 [2024-12-06 11:02:45.479534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.408 11:02:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.408 11:02:46 -- common/autotest_common.sh@862 -- # return 0 00:06:35.408 11:02:46 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:35.408 11:02:46 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:35.408 11:02:46 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:35.408 11:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.408 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.408 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 
00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.408 11:02:46 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # IFS== 00:06:35.408 11:02:46 -- accel/accel.sh@64 -- # read -r opc module 00:06:35.408 11:02:46 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:35.409 11:02:46 -- accel/accel.sh@67 -- # killprocess 68062 00:06:35.409 11:02:46 -- common/autotest_common.sh@936 -- # '[' -z 68062 ']' 00:06:35.409 11:02:46 -- common/autotest_common.sh@940 -- # kill -0 68062 00:06:35.409 11:02:46 -- common/autotest_common.sh@941 -- # uname 00:06:35.409 11:02:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.409 11:02:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68062 00:06:35.409 11:02:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.409 killing process with pid 68062 00:06:35.409 11:02:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.409 11:02:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68062' 00:06:35.409 11:02:46 -- common/autotest_common.sh@955 -- # kill 68062 00:06:35.409 11:02:46 -- common/autotest_common.sh@960 -- # wait 68062 00:06:35.667 11:02:46 -- accel/accel.sh@68 -- # trap - ERR 00:06:35.667 11:02:46 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:35.667 11:02:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:35.667 11:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.667 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.667 11:02:46 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:35.667 11:02:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:35.667 11:02:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.667 11:02:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.667 11:02:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.667 11:02:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:35.667 11:02:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.667 11:02:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.667 11:02:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.667 11:02:46 -- accel/accel.sh@42 -- # jq -r . 00:06:35.667 11:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.667 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.667 11:02:46 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:35.667 11:02:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.667 11:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.667 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.667 ************************************ 00:06:35.667 START TEST accel_missing_filename 00:06:35.667 ************************************ 00:06:35.667 11:02:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:35.667 11:02:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.667 11:02:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:35.667 11:02:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.667 11:02:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.667 11:02:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.667 11:02:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.667 11:02:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:35.667 11:02:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:35.667 11:02:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.667 11:02:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.667 11:02:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.667 11:02:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.667 11:02:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.667 11:02:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.667 11:02:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.667 11:02:46 -- accel/accel.sh@42 -- # jq -r . 00:06:35.667 [2024-12-06 11:02:46.700290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.667 [2024-12-06 11:02:46.700390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68109 ] 00:06:35.926 [2024-12-06 11:02:46.836679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.926 [2024-12-06 11:02:46.866203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.926 [2024-12-06 11:02:46.893750] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.926 [2024-12-06 11:02:46.930173] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:35.926 A filename is required. 
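The "A filename is required." failure above is the expected outcome of running accel_perf's compress workload without an input file. A minimal sketch of the kind of invocation this negative test exercises, using only the binary path and flags that appear in this log (illustrative; the actual wrapper logic lives in accel.sh and autotest_common.sh):

    # compress/decompress workloads need an uncompressed input file via -l;
    # omitting it should print "A filename is required." and exit non-zero
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress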
00:06:35.926 11:02:46 -- common/autotest_common.sh@653 -- # es=234 00:06:35.926 11:02:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.926 11:02:46 -- common/autotest_common.sh@662 -- # es=106 00:06:35.926 11:02:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.926 11:02:46 -- common/autotest_common.sh@670 -- # es=1 00:06:35.926 11:02:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.926 00:06:35.926 real 0m0.306s 00:06:35.926 user 0m0.182s 00:06:35.926 sys 0m0.071s 00:06:35.926 11:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.926 ************************************ 00:06:35.926 END TEST accel_missing_filename 00:06:35.926 ************************************ 00:06:35.926 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.926 11:02:47 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.926 11:02:47 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:35.926 11:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.926 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:35.926 ************************************ 00:06:35.926 START TEST accel_compress_verify 00:06:35.926 ************************************ 00:06:35.926 11:02:47 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.926 11:02:47 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.926 11:02:47 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.926 11:02:47 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:35.926 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.926 11:02:47 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:35.926 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.926 11:02:47 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.926 11:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:35.926 11:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.926 11:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.926 11:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.926 11:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.926 11:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.926 11:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.926 11:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.926 11:02:47 -- accel/accel.sh@42 -- # jq -r . 00:06:35.926 [2024-12-06 11:02:47.053868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:35.926 [2024-12-06 11:02:47.053980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68128 ] 00:06:36.185 [2024-12-06 11:02:47.188441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.185 [2024-12-06 11:02:47.222110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.185 [2024-12-06 11:02:47.253255] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.185 [2024-12-06 11:02:47.290357] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:36.444 00:06:36.444 Compression does not support the verify option, aborting. 00:06:36.444 11:02:47 -- common/autotest_common.sh@653 -- # es=161 00:06:36.444 11:02:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.444 11:02:47 -- common/autotest_common.sh@662 -- # es=33 00:06:36.444 11:02:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.444 11:02:47 -- common/autotest_common.sh@670 -- # es=1 00:06:36.444 11:02:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.444 00:06:36.444 real 0m0.323s 00:06:36.444 user 0m0.190s 00:06:36.444 sys 0m0.079s 00:06:36.444 11:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.444 ************************************ 00:06:36.444 END TEST accel_compress_verify 00:06:36.444 ************************************ 00:06:36.444 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.444 11:02:47 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:36.444 11:02:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.444 11:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.444 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.444 ************************************ 00:06:36.444 START TEST accel_wrong_workload 00:06:36.444 ************************************ 00:06:36.444 11:02:47 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:36.444 11:02:47 -- common/autotest_common.sh@650 -- # local es=0 00:06:36.444 11:02:47 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:36.444 11:02:47 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.444 11:02:47 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:36.444 11:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:36.444 11:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.444 11:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.444 11:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.444 11:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.444 11:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.444 11:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.444 11:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.444 11:02:47 -- accel/accel.sh@42 -- # jq -r . 
00:06:36.444 Unsupported workload type: foobar 00:06:36.444 [2024-12-06 11:02:47.427816] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:36.444 accel_perf options: 00:06:36.444 [-h help message] 00:06:36.444 [-q queue depth per core] 00:06:36.444 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:36.444 [-T number of threads per core 00:06:36.444 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:36.444 [-t time in seconds] 00:06:36.444 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:36.444 [ dif_verify, , dif_generate, dif_generate_copy 00:06:36.444 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:36.444 [-l for compress/decompress workloads, name of uncompressed input file 00:06:36.444 [-S for crc32c workload, use this seed value (default 0) 00:06:36.444 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:36.444 [-f for fill workload, use this BYTE value (default 255) 00:06:36.444 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:36.444 [-y verify result if this switch is on] 00:06:36.444 [-a tasks to allocate per core (default: same value as -q)] 00:06:36.444 Can be used to spread operations across a wider range of memory. 00:06:36.444 11:02:47 -- common/autotest_common.sh@653 -- # es=1 00:06:36.444 11:02:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.444 11:02:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.444 11:02:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.444 00:06:36.444 real 0m0.032s 00:06:36.444 user 0m0.019s 00:06:36.444 sys 0m0.013s 00:06:36.444 11:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.444 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.444 ************************************ 00:06:36.444 END TEST accel_wrong_workload 00:06:36.444 ************************************ 00:06:36.444 11:02:47 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:36.444 11:02:47 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:36.444 11:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.444 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.444 ************************************ 00:06:36.444 START TEST accel_negative_buffers 00:06:36.444 ************************************ 00:06:36.444 11:02:47 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:36.444 11:02:47 -- common/autotest_common.sh@650 -- # local es=0 00:06:36.444 11:02:47 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:36.444 11:02:47 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:36.444 11:02:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.444 11:02:47 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:36.444 11:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:36.444 11:02:47 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:36.445 11:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.445 11:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.445 11:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.445 11:02:47 -- accel/accel.sh@42 -- # jq -r . 00:06:36.445 -x option must be non-negative. 00:06:36.445 [2024-12-06 11:02:47.507554] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:36.445 accel_perf options: 00:06:36.445 [-h help message] 00:06:36.445 [-q queue depth per core] 00:06:36.445 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:36.445 [-T number of threads per core 00:06:36.445 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:36.445 [-t time in seconds] 00:06:36.445 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:36.445 [ dif_verify, , dif_generate, dif_generate_copy 00:06:36.445 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:36.445 [-l for compress/decompress workloads, name of uncompressed input file 00:06:36.445 [-S for crc32c workload, use this seed value (default 0) 00:06:36.445 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:36.445 [-f for fill workload, use this BYTE value (default 255) 00:06:36.445 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:36.445 [-y verify result if this switch is on] 00:06:36.445 [-a tasks to allocate per core (default: same value as -q)] 00:06:36.445 Can be used to spread operations across a wider range of memory. 
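The usage text above is printed because the xor workload was given a negative source-buffer count. A short sketch of an accepted invocation next to the rejected one, assuming the same accel_perf binary used throughout this log (illustrative only, not part of the test script):

    # xor requires at least two source buffers, so a negative -x is rejected
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3    # valid: three source buffers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1   # invalid: "-x option must be non-negative."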
00:06:36.445 11:02:47 -- common/autotest_common.sh@653 -- # es=1 00:06:36.445 11:02:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.445 11:02:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.445 11:02:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.445 00:06:36.445 real 0m0.030s 00:06:36.445 user 0m0.019s 00:06:36.445 sys 0m0.010s 00:06:36.445 11:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.445 ************************************ 00:06:36.445 END TEST accel_negative_buffers 00:06:36.445 ************************************ 00:06:36.445 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.445 11:02:47 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:36.445 11:02:47 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:36.445 11:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.445 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.445 ************************************ 00:06:36.445 START TEST accel_crc32c 00:06:36.445 ************************************ 00:06:36.445 11:02:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:36.445 11:02:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.445 11:02:47 -- accel/accel.sh@17 -- # local accel_module 00:06:36.445 11:02:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:36.445 11:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:36.445 11:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.445 11:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.445 11:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.445 11:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.445 11:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.445 11:02:47 -- accel/accel.sh@42 -- # jq -r . 00:06:36.445 [2024-12-06 11:02:47.584843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.445 [2024-12-06 11:02:47.585316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68192 ] 00:06:36.704 [2024-12-06 11:02:47.722545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.704 [2024-12-06 11:02:47.752576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.095 11:02:48 -- accel/accel.sh@18 -- # out=' 00:06:38.095 SPDK Configuration: 00:06:38.095 Core mask: 0x1 00:06:38.095 00:06:38.095 Accel Perf Configuration: 00:06:38.095 Workload Type: crc32c 00:06:38.095 CRC-32C seed: 32 00:06:38.095 Transfer size: 4096 bytes 00:06:38.095 Vector count 1 00:06:38.095 Module: software 00:06:38.095 Queue depth: 32 00:06:38.095 Allocate depth: 32 00:06:38.095 # threads/core: 1 00:06:38.095 Run time: 1 seconds 00:06:38.095 Verify: Yes 00:06:38.095 00:06:38.095 Running for 1 seconds... 
00:06:38.095 00:06:38.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.095 ------------------------------------------------------------------------------------ 00:06:38.095 0,0 530752/s 2073 MiB/s 0 0 00:06:38.095 ==================================================================================== 00:06:38.095 Total 530752/s 2073 MiB/s 0 0' 00:06:38.095 11:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.095 11:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:38.095 11:02:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.095 11:02:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.095 11:02:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.095 11:02:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.095 11:02:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.095 11:02:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.095 11:02:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.095 11:02:48 -- accel/accel.sh@42 -- # jq -r . 00:06:38.095 [2024-12-06 11:02:48.890077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.095 [2024-12-06 11:02:48.890171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68206 ] 00:06:38.095 [2024-12-06 11:02:49.025856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.095 [2024-12-06 11:02:49.055279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=0x1 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=crc32c 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=32 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=software 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=32 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=32 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=1 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val=Yes 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.095 11:02:49 -- accel/accel.sh@21 -- # val= 00:06:38.095 11:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.095 11:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- 
accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@21 -- # val= 00:06:39.029 11:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.029 11:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.029 11:02:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.029 11:02:50 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:39.029 11:02:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.029 00:06:39.029 real 0m2.609s 00:06:39.029 user 0m2.269s 00:06:39.029 sys 0m0.143s 00:06:39.029 11:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.029 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.029 ************************************ 00:06:39.029 END TEST accel_crc32c 00:06:39.029 ************************************ 00:06:39.288 11:02:50 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:39.288 11:02:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:39.288 11:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.288 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.288 ************************************ 00:06:39.288 START TEST accel_crc32c_C2 00:06:39.288 ************************************ 00:06:39.288 11:02:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:39.288 11:02:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.288 11:02:50 -- accel/accel.sh@17 -- # local accel_module 00:06:39.288 11:02:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:39.288 11:02:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:39.288 11:02:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.288 11:02:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.288 11:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.288 11:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.288 11:02:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.288 11:02:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.288 11:02:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.288 11:02:50 -- accel/accel.sh@42 -- # jq -r . 00:06:39.288 [2024-12-06 11:02:50.247246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.288 [2024-12-06 11:02:50.247358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68235 ] 00:06:39.288 [2024-12-06 11:02:50.382655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.288 [2024-12-06 11:02:50.413000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.659 11:02:51 -- accel/accel.sh@18 -- # out=' 00:06:40.659 SPDK Configuration: 00:06:40.659 Core mask: 0x1 00:06:40.659 00:06:40.659 Accel Perf Configuration: 00:06:40.659 Workload Type: crc32c 00:06:40.659 CRC-32C seed: 0 00:06:40.659 Transfer size: 4096 bytes 00:06:40.659 Vector count 2 00:06:40.659 Module: software 00:06:40.659 Queue depth: 32 00:06:40.659 Allocate depth: 32 00:06:40.659 # threads/core: 1 00:06:40.659 Run time: 1 seconds 00:06:40.659 Verify: Yes 00:06:40.659 00:06:40.659 Running for 1 seconds... 
00:06:40.659 00:06:40.659 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.659 ------------------------------------------------------------------------------------ 00:06:40.659 0,0 411040/s 3211 MiB/s 0 0 00:06:40.659 ==================================================================================== 00:06:40.659 Total 411040/s 1605 MiB/s 0 0' 00:06:40.659 11:02:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.659 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.659 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.659 11:02:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.659 11:02:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.659 11:02:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.659 11:02:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.659 11:02:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.659 11:02:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.659 11:02:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.659 11:02:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.659 11:02:51 -- accel/accel.sh@42 -- # jq -r . 00:06:40.659 [2024-12-06 11:02:51.542936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.659 [2024-12-06 11:02:51.543054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68260 ] 00:06:40.659 [2024-12-06 11:02:51.671379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.659 [2024-12-06 11:02:51.703016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.659 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.659 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.659 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.659 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.659 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=0x1 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=crc32c 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=0 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=software 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=32 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=32 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=1 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val=Yes 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.660 11:02:51 -- accel/accel.sh@21 -- # val= 00:06:40.660 11:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.660 11:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- 
accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@21 -- # val= 00:06:42.035 11:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.035 11:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.035 11:02:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.035 11:02:52 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:42.035 11:02:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.035 00:06:42.035 real 0m2.600s 00:06:42.035 user 0m2.270s 00:06:42.035 sys 0m0.132s 00:06:42.035 11:02:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.035 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:42.035 ************************************ 00:06:42.035 END TEST accel_crc32c_C2 00:06:42.035 ************************************ 00:06:42.035 11:02:52 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.035 11:02:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.035 11:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.035 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:42.035 ************************************ 00:06:42.035 START TEST accel_copy 00:06:42.035 ************************************ 00:06:42.035 11:02:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:42.035 11:02:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.035 11:02:52 -- accel/accel.sh@17 -- # local accel_module 00:06:42.035 11:02:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:42.035 11:02:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.036 11:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.036 11:02:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.036 11:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.036 11:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.036 11:02:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.036 11:02:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.036 11:02:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.036 11:02:52 -- accel/accel.sh@42 -- # jq -r . 00:06:42.036 [2024-12-06 11:02:52.895865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.036 [2024-12-06 11:02:52.895957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68289 ] 00:06:42.036 [2024-12-06 11:02:53.032720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.036 [2024-12-06 11:02:53.062849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.411 11:02:54 -- accel/accel.sh@18 -- # out=' 00:06:43.411 SPDK Configuration: 00:06:43.411 Core mask: 0x1 00:06:43.411 00:06:43.411 Accel Perf Configuration: 00:06:43.411 Workload Type: copy 00:06:43.411 Transfer size: 4096 bytes 00:06:43.411 Vector count 1 00:06:43.411 Module: software 00:06:43.411 Queue depth: 32 00:06:43.411 Allocate depth: 32 00:06:43.411 # threads/core: 1 00:06:43.411 Run time: 1 seconds 00:06:43.411 Verify: Yes 00:06:43.411 00:06:43.411 Running for 1 seconds... 
00:06:43.411 00:06:43.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.411 ------------------------------------------------------------------------------------ 00:06:43.411 0,0 366496/s 1431 MiB/s 0 0 00:06:43.411 ==================================================================================== 00:06:43.411 Total 366496/s 1431 MiB/s 0 0' 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:43.411 11:02:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.411 11:02:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.411 11:02:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.411 11:02:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.411 11:02:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.411 11:02:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.411 11:02:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.411 11:02:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.411 11:02:54 -- accel/accel.sh@42 -- # jq -r . 00:06:43.411 [2024-12-06 11:02:54.196490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.411 [2024-12-06 11:02:54.196602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68303 ] 00:06:43.411 [2024-12-06 11:02:54.327021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.411 [2024-12-06 11:02:54.359011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=0x1 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=copy 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- 
accel/accel.sh@21 -- # val= 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=software 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=32 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=32 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=1 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.411 11:02:54 -- accel/accel.sh@21 -- # val=Yes 00:06:43.411 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.411 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.412 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.412 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.412 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.412 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.412 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.412 11:02:54 -- accel/accel.sh@21 -- # val= 00:06:43.412 11:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.412 11:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.412 11:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@21 -- # val= 00:06:44.349 11:02:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.349 11:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:44.349 11:02:55 -- 
accel/accel.sh@20 -- # read -r var val 00:06:44.349 11:02:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.349 11:02:55 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:44.349 11:02:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.349 00:06:44.349 real 0m2.603s 00:06:44.349 user 0m2.267s 00:06:44.349 sys 0m0.134s 00:06:44.349 11:02:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.349 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.349 ************************************ 00:06:44.349 END TEST accel_copy 00:06:44.349 ************************************ 00:06:44.609 11:02:55 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.609 11:02:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:44.609 11:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.609 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.609 ************************************ 00:06:44.609 START TEST accel_fill 00:06:44.609 ************************************ 00:06:44.609 11:02:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.609 11:02:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.609 11:02:55 -- accel/accel.sh@17 -- # local accel_module 00:06:44.609 11:02:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.609 11:02:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.609 11:02:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.609 11:02:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.609 11:02:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.609 11:02:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.609 11:02:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.609 11:02:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.609 11:02:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.609 11:02:55 -- accel/accel.sh@42 -- # jq -r . 00:06:44.609 [2024-12-06 11:02:55.549959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.609 [2024-12-06 11:02:55.550059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68338 ] 00:06:44.609 [2024-12-06 11:02:55.688258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.609 [2024-12-06 11:02:55.717572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.984 11:02:56 -- accel/accel.sh@18 -- # out=' 00:06:45.984 SPDK Configuration: 00:06:45.984 Core mask: 0x1 00:06:45.984 00:06:45.984 Accel Perf Configuration: 00:06:45.984 Workload Type: fill 00:06:45.984 Fill pattern: 0x80 00:06:45.984 Transfer size: 4096 bytes 00:06:45.984 Vector count 1 00:06:45.984 Module: software 00:06:45.984 Queue depth: 64 00:06:45.984 Allocate depth: 64 00:06:45.984 # threads/core: 1 00:06:45.984 Run time: 1 seconds 00:06:45.984 Verify: Yes 00:06:45.984 00:06:45.984 Running for 1 seconds... 
00:06:45.984 00:06:45.984 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.984 ------------------------------------------------------------------------------------ 00:06:45.984 0,0 544128/s 2125 MiB/s 0 0 00:06:45.984 ==================================================================================== 00:06:45.984 Total 544128/s 2125 MiB/s 0 0' 00:06:45.984 11:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:45.984 11:02:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:45.984 11:02:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:45.984 11:02:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.984 11:02:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.984 11:02:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.984 11:02:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.984 11:02:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.984 11:02:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.984 11:02:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.984 11:02:56 -- accel/accel.sh@42 -- # jq -r . 00:06:45.984 [2024-12-06 11:02:56.862050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.984 [2024-12-06 11:02:56.862142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68359 ] 00:06:45.984 [2024-12-06 11:02:56.999627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.984 [2024-12-06 11:02:57.028784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.984 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.984 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.984 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.984 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.984 11:02:57 -- accel/accel.sh@21 -- # val=0x1 00:06:45.984 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.984 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.984 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.984 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.984 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.984 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=fill 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=0x80 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 
00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=software 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=64 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=64 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=1 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val=Yes 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.985 11:02:57 -- accel/accel.sh@21 -- # val= 00:06:45.985 11:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.985 11:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 
00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 11:02:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 11:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 11:02:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.363 11:02:58 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:47.363 11:02:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.363 00:06:47.363 real 0m2.619s 00:06:47.363 user 0m2.287s 00:06:47.363 sys 0m0.133s 00:06:47.363 11:02:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.363 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 ************************************ 00:06:47.363 END TEST accel_fill 00:06:47.363 ************************************ 00:06:47.363 11:02:58 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:47.363 11:02:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.363 11:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.363 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 ************************************ 00:06:47.363 START TEST accel_copy_crc32c 00:06:47.363 ************************************ 00:06:47.363 11:02:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:47.363 11:02:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.363 11:02:58 -- accel/accel.sh@17 -- # local accel_module 00:06:47.363 11:02:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:47.363 11:02:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:47.363 11:02:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.363 11:02:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.363 11:02:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.363 11:02:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.363 11:02:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.363 11:02:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.363 11:02:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.363 11:02:58 -- accel/accel.sh@42 -- # jq -r . 00:06:47.363 [2024-12-06 11:02:58.229561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.363 [2024-12-06 11:02:58.229657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68388 ] 00:06:47.363 [2024-12-06 11:02:58.368154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.363 [2024-12-06 11:02:58.400327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.767 11:02:59 -- accel/accel.sh@18 -- # out=' 00:06:48.767 SPDK Configuration: 00:06:48.767 Core mask: 0x1 00:06:48.767 00:06:48.767 Accel Perf Configuration: 00:06:48.767 Workload Type: copy_crc32c 00:06:48.767 CRC-32C seed: 0 00:06:48.767 Vector size: 4096 bytes 00:06:48.767 Transfer size: 4096 bytes 00:06:48.767 Vector count 1 00:06:48.767 Module: software 00:06:48.767 Queue depth: 32 00:06:48.767 Allocate depth: 32 00:06:48.767 # threads/core: 1 00:06:48.767 Run time: 1 seconds 00:06:48.767 Verify: Yes 00:06:48.767 00:06:48.767 Running for 1 seconds... 
00:06:48.767 00:06:48.767 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.767 ------------------------------------------------------------------------------------ 00:06:48.767 0,0 290240/s 1133 MiB/s 0 0 00:06:48.767 ==================================================================================== 00:06:48.767 Total 290240/s 1133 MiB/s 0 0' 00:06:48.767 11:02:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.767 11:02:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.767 11:02:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.767 11:02:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.767 11:02:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.767 11:02:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.767 11:02:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.767 11:02:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.767 11:02:59 -- accel/accel.sh@42 -- # jq -r . 00:06:48.767 [2024-12-06 11:02:59.544576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.767 [2024-12-06 11:02:59.544666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68410 ] 00:06:48.767 [2024-12-06 11:02:59.677026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.767 [2024-12-06 11:02:59.706395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val=0x1 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val=0 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 
11:02:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.767 11:02:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.767 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.767 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val=software 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val=32 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val=32 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val=1 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val=Yes 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.768 11:02:59 -- accel/accel.sh@21 -- # val= 00:06:48.768 11:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.768 11:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 
00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@21 -- # val= 00:06:49.707 11:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.707 11:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.707 11:03:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.707 11:03:00 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:49.707 11:03:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.707 00:06:49.707 real 0m2.627s 00:06:49.707 user 0m2.284s 00:06:49.707 sys 0m0.141s 00:06:49.707 11:03:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.707 ************************************ 00:06:49.707 END TEST accel_copy_crc32c 00:06:49.707 ************************************ 00:06:49.707 11:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:49.966 11:03:00 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.966 11:03:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:49.966 11:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.966 11:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:49.966 ************************************ 00:06:49.966 START TEST accel_copy_crc32c_C2 00:06:49.966 ************************************ 00:06:49.966 11:03:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.966 11:03:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.966 11:03:00 -- accel/accel.sh@17 -- # local accel_module 00:06:49.966 11:03:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:49.966 11:03:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:49.966 11:03:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.966 11:03:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.966 11:03:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.966 11:03:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.966 11:03:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.966 11:03:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.966 11:03:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.966 11:03:00 -- accel/accel.sh@42 -- # jq -r . 00:06:49.966 [2024-12-06 11:03:00.908521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.966 [2024-12-06 11:03:00.908640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68444 ] 00:06:49.966 [2024-12-06 11:03:01.044946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.966 [2024-12-06 11:03:01.075426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.346 11:03:02 -- accel/accel.sh@18 -- # out=' 00:06:51.346 SPDK Configuration: 00:06:51.346 Core mask: 0x1 00:06:51.346 00:06:51.346 Accel Perf Configuration: 00:06:51.346 Workload Type: copy_crc32c 00:06:51.346 CRC-32C seed: 0 00:06:51.346 Vector size: 4096 bytes 00:06:51.346 Transfer size: 8192 bytes 00:06:51.346 Vector count 2 00:06:51.346 Module: software 00:06:51.346 Queue depth: 32 00:06:51.346 Allocate depth: 32 00:06:51.346 # threads/core: 1 00:06:51.346 Run time: 1 seconds 00:06:51.346 Verify: Yes 00:06:51.346 00:06:51.346 Running for 1 seconds... 00:06:51.346 00:06:51.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.346 ------------------------------------------------------------------------------------ 00:06:51.346 0,0 200512/s 1566 MiB/s 0 0 00:06:51.346 ==================================================================================== 00:06:51.346 Total 200512/s 783 MiB/s 0 0' 00:06:51.346 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.346 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.346 11:03:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:51.346 11:03:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:51.346 11:03:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.346 11:03:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.346 11:03:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.346 11:03:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.346 11:03:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.346 11:03:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.346 11:03:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.346 11:03:02 -- accel/accel.sh@42 -- # jq -r . 00:06:51.346 [2024-12-06 11:03:02.221288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:51.346 [2024-12-06 11:03:02.221377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68458 ] 00:06:51.346 [2024-12-06 11:03:02.358772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.346 [2024-12-06 11:03:02.389671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.346 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.346 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.346 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=0x1 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=0 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=software 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=32 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=32 
00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=1 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val=Yes 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.347 11:03:02 -- accel/accel.sh@21 -- # val= 00:06:51.347 11:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.347 11:03:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@21 -- # val= 00:06:52.726 11:03:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.726 11:03:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.726 11:03:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.726 11:03:03 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:52.726 11:03:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.726 00:06:52.726 real 0m2.637s 00:06:52.726 user 0m2.303s 00:06:52.726 sys 0m0.137s 00:06:52.726 11:03:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.726 ************************************ 00:06:52.726 END TEST accel_copy_crc32c_C2 00:06:52.726 ************************************ 00:06:52.726 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:52.726 11:03:03 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:52.726 11:03:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:52.726 11:03:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.726 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:52.726 ************************************ 00:06:52.726 START TEST accel_dualcast 00:06:52.726 ************************************ 00:06:52.726 11:03:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:52.726 11:03:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.726 11:03:03 -- accel/accel.sh@17 -- # local accel_module 00:06:52.726 11:03:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:52.726 11:03:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:52.726 11:03:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.726 11:03:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.726 11:03:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.726 11:03:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.726 11:03:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.726 11:03:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.726 11:03:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.726 11:03:03 -- accel/accel.sh@42 -- # jq -r . 00:06:52.726 [2024-12-06 11:03:03.590407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.726 [2024-12-06 11:03:03.590523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68493 ] 00:06:52.726 [2024-12-06 11:03:03.726399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.726 [2024-12-06 11:03:03.756985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.107 11:03:04 -- accel/accel.sh@18 -- # out=' 00:06:54.107 SPDK Configuration: 00:06:54.107 Core mask: 0x1 00:06:54.107 00:06:54.107 Accel Perf Configuration: 00:06:54.107 Workload Type: dualcast 00:06:54.107 Transfer size: 4096 bytes 00:06:54.107 Vector count 1 00:06:54.107 Module: software 00:06:54.107 Queue depth: 32 00:06:54.107 Allocate depth: 32 00:06:54.107 # threads/core: 1 00:06:54.107 Run time: 1 seconds 00:06:54.107 Verify: Yes 00:06:54.107 00:06:54.107 Running for 1 seconds... 00:06:54.107 00:06:54.107 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.107 ------------------------------------------------------------------------------------ 00:06:54.107 0,0 407232/s 1590 MiB/s 0 0 00:06:54.107 ==================================================================================== 00:06:54.107 Total 407232/s 1590 MiB/s 0 0' 00:06:54.107 11:03:04 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:54.107 11:03:04 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:54.107 11:03:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.107 11:03:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.107 11:03:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.107 11:03:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.107 11:03:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.107 11:03:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.107 11:03:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.107 11:03:04 -- accel/accel.sh@42 -- # jq -r . 
00:06:54.107 [2024-12-06 11:03:04.895619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.107 [2024-12-06 11:03:04.895708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68507 ] 00:06:54.107 [2024-12-06 11:03:05.033414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.107 [2024-12-06 11:03:05.063195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=0x1 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=dualcast 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=software 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=32 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=32 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=1 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 
11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val=Yes 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.107 11:03:05 -- accel/accel.sh@21 -- # val= 00:06:54.107 11:03:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.107 11:03:05 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@21 -- # val= 00:06:55.043 11:03:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.043 11:03:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.043 11:03:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.043 11:03:06 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:55.043 11:03:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.043 00:06:55.043 real 0m2.614s 00:06:55.043 user 0m2.281s 00:06:55.043 sys 0m0.133s 00:06:55.043 11:03:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.043 ************************************ 00:06:55.043 END TEST accel_dualcast 00:06:55.043 ************************************ 00:06:55.043 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.302 11:03:06 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:55.303 11:03:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:55.303 11:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.303 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.303 ************************************ 00:06:55.303 START TEST accel_compare 00:06:55.303 ************************************ 00:06:55.303 11:03:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:55.303 
11:03:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.303 11:03:06 -- accel/accel.sh@17 -- # local accel_module 00:06:55.303 11:03:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:55.303 11:03:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:55.303 11:03:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.303 11:03:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.303 11:03:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.303 11:03:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.303 11:03:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.303 11:03:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.303 11:03:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.303 11:03:06 -- accel/accel.sh@42 -- # jq -r . 00:06:55.303 [2024-12-06 11:03:06.258472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.303 [2024-12-06 11:03:06.258604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68541 ] 00:06:55.303 [2024-12-06 11:03:06.395309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.303 [2024-12-06 11:03:06.424717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.681 11:03:07 -- accel/accel.sh@18 -- # out=' 00:06:56.681 SPDK Configuration: 00:06:56.681 Core mask: 0x1 00:06:56.681 00:06:56.681 Accel Perf Configuration: 00:06:56.681 Workload Type: compare 00:06:56.681 Transfer size: 4096 bytes 00:06:56.681 Vector count 1 00:06:56.681 Module: software 00:06:56.681 Queue depth: 32 00:06:56.681 Allocate depth: 32 00:06:56.681 # threads/core: 1 00:06:56.681 Run time: 1 seconds 00:06:56.681 Verify: Yes 00:06:56.681 00:06:56.681 Running for 1 seconds... 00:06:56.681 00:06:56.681 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.681 ------------------------------------------------------------------------------------ 00:06:56.681 0,0 534400/s 2087 MiB/s 0 0 00:06:56.681 ==================================================================================== 00:06:56.681 Total 534400/s 2087 MiB/s 0 0' 00:06:56.681 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.681 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.681 11:03:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:56.681 11:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.681 11:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.681 11:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.681 11:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.681 11:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.681 11:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.681 11:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.681 11:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.681 11:03:07 -- accel/accel.sh@42 -- # jq -r . 00:06:56.681 [2024-12-06 11:03:07.562723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:56.681 [2024-12-06 11:03:07.562820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68555 ] 00:06:56.682 [2024-12-06 11:03:07.700257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.682 [2024-12-06 11:03:07.729907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=0x1 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=compare 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=software 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=32 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=32 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=1 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val=Yes 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.682 11:03:07 -- accel/accel.sh@21 -- # val= 00:06:56.682 11:03:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.682 11:03:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@21 -- # val= 00:06:58.060 11:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 11:03:08 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 11:03:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.060 11:03:08 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:58.060 11:03:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.060 00:06:58.060 real 0m2.613s 00:06:58.060 user 0m2.276s 00:06:58.060 sys 0m0.139s 00:06:58.060 11:03:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.060 ************************************ 00:06:58.060 END TEST accel_compare 00:06:58.060 ************************************ 00:06:58.060 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:58.060 11:03:08 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:58.060 11:03:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:58.060 11:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.060 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:58.060 ************************************ 00:06:58.060 START TEST accel_xor 00:06:58.060 ************************************ 00:06:58.060 11:03:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:58.060 11:03:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.060 11:03:08 -- accel/accel.sh@17 -- # local accel_module 00:06:58.060 
11:03:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:58.060 11:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:58.060 11:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.060 11:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.060 11:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.060 11:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.060 11:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.060 11:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.060 11:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.060 11:03:08 -- accel/accel.sh@42 -- # jq -r . 00:06:58.060 [2024-12-06 11:03:08.921114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.060 [2024-12-06 11:03:08.921204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68590 ] 00:06:58.060 [2024-12-06 11:03:09.055009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.060 [2024-12-06 11:03:09.084388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.437 11:03:10 -- accel/accel.sh@18 -- # out=' 00:06:59.437 SPDK Configuration: 00:06:59.437 Core mask: 0x1 00:06:59.437 00:06:59.437 Accel Perf Configuration: 00:06:59.437 Workload Type: xor 00:06:59.437 Source buffers: 2 00:06:59.437 Transfer size: 4096 bytes 00:06:59.437 Vector count 1 00:06:59.437 Module: software 00:06:59.437 Queue depth: 32 00:06:59.437 Allocate depth: 32 00:06:59.437 # threads/core: 1 00:06:59.437 Run time: 1 seconds 00:06:59.437 Verify: Yes 00:06:59.437 00:06:59.437 Running for 1 seconds... 00:06:59.437 00:06:59.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.437 ------------------------------------------------------------------------------------ 00:06:59.437 0,0 282432/s 1103 MiB/s 0 0 00:06:59.437 ==================================================================================== 00:06:59.437 Total 282432/s 1103 MiB/s 0 0' 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:59.437 11:03:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.437 11:03:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.437 11:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.437 11:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.437 11:03:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.437 11:03:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.437 11:03:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.437 11:03:10 -- accel/accel.sh@42 -- # jq -r . 00:06:59.437 [2024-12-06 11:03:10.224437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.437 [2024-12-06 11:03:10.224526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68604 ] 00:06:59.437 [2024-12-06 11:03:10.359593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.437 [2024-12-06 11:03:10.388765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=0x1 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=xor 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=2 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=software 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=32 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=32 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=1 00:06:59.437 11:03:10 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val=Yes 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.437 11:03:10 -- accel/accel.sh@21 -- # val= 00:06:59.437 11:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.437 11:03:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@21 -- # val= 00:07:00.372 11:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 11:03:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 11:03:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.372 11:03:11 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:00.372 11:03:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.372 00:07:00.372 real 0m2.610s 00:07:00.372 user 0m2.279s 00:07:00.372 sys 0m0.133s 00:07:00.372 11:03:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.372 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.372 ************************************ 00:07:00.372 END TEST accel_xor 00:07:00.372 ************************************ 00:07:00.631 11:03:11 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:00.631 11:03:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:00.631 11:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.631 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.631 ************************************ 00:07:00.631 START TEST accel_xor 00:07:00.631 ************************************ 00:07:00.631 
11:03:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:00.631 11:03:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.631 11:03:11 -- accel/accel.sh@17 -- # local accel_module 00:07:00.631 11:03:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:00.631 11:03:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:00.631 11:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.631 11:03:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.631 11:03:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.631 11:03:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.631 11:03:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.631 11:03:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.631 11:03:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.631 11:03:11 -- accel/accel.sh@42 -- # jq -r . 00:07:00.631 [2024-12-06 11:03:11.581511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.631 [2024-12-06 11:03:11.581618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68638 ] 00:07:00.631 [2024-12-06 11:03:11.717867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.631 [2024-12-06 11:03:11.747005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.005 11:03:12 -- accel/accel.sh@18 -- # out=' 00:07:02.005 SPDK Configuration: 00:07:02.005 Core mask: 0x1 00:07:02.005 00:07:02.005 Accel Perf Configuration: 00:07:02.005 Workload Type: xor 00:07:02.005 Source buffers: 3 00:07:02.006 Transfer size: 4096 bytes 00:07:02.006 Vector count 1 00:07:02.006 Module: software 00:07:02.006 Queue depth: 32 00:07:02.006 Allocate depth: 32 00:07:02.006 # threads/core: 1 00:07:02.006 Run time: 1 seconds 00:07:02.006 Verify: Yes 00:07:02.006 00:07:02.006 Running for 1 seconds... 00:07:02.006 00:07:02.006 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.006 ------------------------------------------------------------------------------------ 00:07:02.006 0,0 271168/s 1059 MiB/s 0 0 00:07:02.006 ==================================================================================== 00:07:02.006 Total 271168/s 1059 MiB/s 0 0' 00:07:02.006 11:03:12 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:12 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:02.006 11:03:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:02.006 11:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.006 11:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.006 11:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.006 11:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.006 11:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.006 11:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.006 11:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.006 11:03:12 -- accel/accel.sh@42 -- # jq -r . 00:07:02.006 [2024-12-06 11:03:12.886251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:02.006 [2024-12-06 11:03:12.886341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68658 ] 00:07:02.006 [2024-12-06 11:03:13.023591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.006 [2024-12-06 11:03:13.052814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=0x1 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=xor 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=3 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=software 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=32 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=32 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=1 00:07:02.006 11:03:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val=Yes 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.006 11:03:13 -- accel/accel.sh@21 -- # val= 00:07:02.006 11:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.006 11:03:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@21 -- # val= 00:07:03.382 11:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.382 11:03:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.382 11:03:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.382 11:03:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:03.382 11:03:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.382 00:07:03.382 real 0m2.617s 00:07:03.382 user 0m2.281s 00:07:03.382 sys 0m0.136s 00:07:03.382 ************************************ 00:07:03.383 END TEST accel_xor 00:07:03.383 ************************************ 00:07:03.383 11:03:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.383 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.383 11:03:14 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:03.383 11:03:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:03.383 11:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.383 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.383 ************************************ 00:07:03.383 START TEST accel_dif_verify 00:07:03.383 ************************************ 
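As the banner above indicates, the next test exercises the dif_verify workload. The traced command is the same accel_perf binary with -w dif_verify and, unlike the xor case, no -y flag; the configuration block below accordingly reports Verify: No, presumably because the DIF comparison is itself the operation being measured. Outside the harness the pass could be reproduced with something like the following (dropping -c /dev/fd/62, which here only injects the JSON accel config assembled by build_accel_config; treating it as omittable is an assumption, not something this log shows):

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify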
00:07:03.383 11:03:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:03.383 11:03:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.383 11:03:14 -- accel/accel.sh@17 -- # local accel_module 00:07:03.383 11:03:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:03.383 11:03:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:03.383 11:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.383 11:03:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.383 11:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.383 11:03:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.383 11:03:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.383 11:03:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.383 11:03:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.383 11:03:14 -- accel/accel.sh@42 -- # jq -r . 00:07:03.383 [2024-12-06 11:03:14.251209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.383 [2024-12-06 11:03:14.251282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68687 ] 00:07:03.383 [2024-12-06 11:03:14.382784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.383 [2024-12-06 11:03:14.412347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.760 11:03:15 -- accel/accel.sh@18 -- # out=' 00:07:04.760 SPDK Configuration: 00:07:04.760 Core mask: 0x1 00:07:04.760 00:07:04.760 Accel Perf Configuration: 00:07:04.760 Workload Type: dif_verify 00:07:04.760 Vector size: 4096 bytes 00:07:04.760 Transfer size: 4096 bytes 00:07:04.760 Block size: 512 bytes 00:07:04.760 Metadata size: 8 bytes 00:07:04.760 Vector count 1 00:07:04.760 Module: software 00:07:04.760 Queue depth: 32 00:07:04.760 Allocate depth: 32 00:07:04.760 # threads/core: 1 00:07:04.760 Run time: 1 seconds 00:07:04.760 Verify: No 00:07:04.760 00:07:04.760 Running for 1 seconds... 00:07:04.760 00:07:04.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.760 ------------------------------------------------------------------------------------ 00:07:04.760 0,0 117312/s 465 MiB/s 0 0 00:07:04.760 ==================================================================================== 00:07:04.760 Total 117312/s 458 MiB/s 0 0' 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.760 11:03:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:04.760 11:03:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.760 11:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.760 11:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.760 11:03:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.760 11:03:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.760 11:03:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.760 11:03:15 -- accel/accel.sh@42 -- # jq -r . 00:07:04.760 [2024-12-06 11:03:15.561774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
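The configuration above lists 4096-byte transfers split into 512-byte blocks with 8 bytes of metadata each, which reads as one DIF per 512-byte block. On that assumption, the 117312 transfers per second reported for core 0 correspond to roughly 938k per-block DIF checks per second:

$ echo $((4096 / 512))      # 8 protected blocks per 4 KiB transfer
$ echo $((117312 * 8))      # 938496 block-level checks per second, assuming one check per block

The trace that follows is the second dif_verify pass being started.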
00:07:04.760 [2024-12-06 11:03:15.561866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68706 ] 00:07:04.760 [2024-12-06 11:03:15.697062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.760 [2024-12-06 11:03:15.726462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=0x1 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=dif_verify 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=software 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 
-- # val=32 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=32 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=1 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val=No 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.760 11:03:15 -- accel/accel.sh@21 -- # val= 00:07:04.760 11:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.760 11:03:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@21 -- # val= 00:07:06.139 11:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.139 11:03:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.139 11:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.139 11:03:16 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:06.139 11:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.139 00:07:06.139 real 0m2.622s 00:07:06.139 user 0m2.294s 00:07:06.139 sys 0m0.127s 00:07:06.139 11:03:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.139 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:07:06.139 ************************************ 00:07:06.139 END TEST 
accel_dif_verify 00:07:06.139 ************************************ 00:07:06.139 11:03:16 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:06.139 11:03:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:06.139 11:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.139 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:07:06.139 ************************************ 00:07:06.139 START TEST accel_dif_generate 00:07:06.139 ************************************ 00:07:06.139 11:03:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:06.139 11:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.139 11:03:16 -- accel/accel.sh@17 -- # local accel_module 00:07:06.139 11:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:06.139 11:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:06.139 11:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.139 11:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.139 11:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.139 11:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.139 11:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.139 11:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.139 11:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.139 11:03:16 -- accel/accel.sh@42 -- # jq -r . 00:07:06.139 [2024-12-06 11:03:16.931278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.139 [2024-12-06 11:03:16.931517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:07:06.139 [2024-12-06 11:03:17.068546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.139 [2024-12-06 11:03:17.098902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.518 11:03:18 -- accel/accel.sh@18 -- # out=' 00:07:07.518 SPDK Configuration: 00:07:07.518 Core mask: 0x1 00:07:07.518 00:07:07.518 Accel Perf Configuration: 00:07:07.518 Workload Type: dif_generate 00:07:07.518 Vector size: 4096 bytes 00:07:07.518 Transfer size: 4096 bytes 00:07:07.518 Block size: 512 bytes 00:07:07.518 Metadata size: 8 bytes 00:07:07.518 Vector count 1 00:07:07.518 Module: software 00:07:07.518 Queue depth: 32 00:07:07.518 Allocate depth: 32 00:07:07.518 # threads/core: 1 00:07:07.518 Run time: 1 seconds 00:07:07.518 Verify: No 00:07:07.518 00:07:07.518 Running for 1 seconds... 
00:07:07.518 00:07:07.518 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.518 ------------------------------------------------------------------------------------ 00:07:07.518 0,0 142464/s 565 MiB/s 0 0 00:07:07.518 ==================================================================================== 00:07:07.518 Total 142464/s 556 MiB/s 0 0' 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.518 11:03:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:07.518 11:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:07.518 11:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.518 11:03:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.518 11:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.518 11:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.518 11:03:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.518 11:03:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.518 11:03:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.518 11:03:18 -- accel/accel.sh@42 -- # jq -r . 00:07:07.518 [2024-12-06 11:03:18.246763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.518 [2024-12-06 11:03:18.246876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68755 ] 00:07:07.518 [2024-12-06 11:03:18.383972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.518 [2024-12-06 11:03:18.413362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.518 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.518 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.518 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.518 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.518 11:03:18 -- accel/accel.sh@21 -- # val=0x1 00:07:07.518 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.518 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.518 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.518 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.518 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=dif_generate 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 
00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=software 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=32 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=32 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=1 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val=No 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.519 11:03:18 -- accel/accel.sh@21 -- # val= 00:07:07.519 11:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.519 11:03:18 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- 
accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@21 -- # val= 00:07:08.457 11:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.457 11:03:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.457 11:03:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.457 11:03:19 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:08.457 11:03:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.457 00:07:08.457 real 0m2.626s 00:07:08.457 user 0m2.284s 00:07:08.457 sys 0m0.143s 00:07:08.457 11:03:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.457 ************************************ 00:07:08.457 END TEST accel_dif_generate 00:07:08.457 ************************************ 00:07:08.457 11:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.457 11:03:19 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:08.457 11:03:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:08.457 11:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.457 11:03:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.457 ************************************ 00:07:08.457 START TEST accel_dif_generate_copy 00:07:08.457 ************************************ 00:07:08.457 11:03:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:08.457 11:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.457 11:03:19 -- accel/accel.sh@17 -- # local accel_module 00:07:08.457 11:03:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:08.457 11:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:08.457 11:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.457 11:03:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.457 11:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.457 11:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.457 11:03:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.457 11:03:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.457 11:03:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.457 11:03:19 -- accel/accel.sh@42 -- # jq -r . 00:07:08.717 [2024-12-06 11:03:19.605261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:08.717 [2024-12-06 11:03:19.605371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68784 ] 00:07:08.717 [2024-12-06 11:03:19.743239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.717 [2024-12-06 11:03:19.772822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.098 11:03:20 -- accel/accel.sh@18 -- # out=' 00:07:10.098 SPDK Configuration: 00:07:10.098 Core mask: 0x1 00:07:10.098 00:07:10.098 Accel Perf Configuration: 00:07:10.098 Workload Type: dif_generate_copy 00:07:10.098 Vector size: 4096 bytes 00:07:10.098 Transfer size: 4096 bytes 00:07:10.098 Vector count 1 00:07:10.098 Module: software 00:07:10.098 Queue depth: 32 00:07:10.098 Allocate depth: 32 00:07:10.098 # threads/core: 1 00:07:10.098 Run time: 1 seconds 00:07:10.098 Verify: No 00:07:10.098 00:07:10.098 Running for 1 seconds... 00:07:10.098 00:07:10.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.098 ------------------------------------------------------------------------------------ 00:07:10.098 0,0 108032/s 428 MiB/s 0 0 00:07:10.098 ==================================================================================== 00:07:10.098 Total 108032/s 422 MiB/s 0 0' 00:07:10.098 11:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:10.098 11:03:20 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:20 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:10.098 11:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.098 11:03:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.098 11:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.098 11:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.098 11:03:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.098 11:03:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.098 11:03:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.098 11:03:20 -- accel/accel.sh@42 -- # jq -r . 00:07:10.098 [2024-12-06 11:03:20.905588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
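Comparing the two generate variants: dif_generate above sustained 142464 transfers per second, while dif_generate_copy reaches 108032, about three quarters of that rate; presumably the additional copy into a separate output buffer accounts for the gap, though the log itself does not break that down:

$ echo $((108032 * 100 / 142464))   # 75, i.e. dif_generate_copy runs at ~75% of the dif_generate rate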
00:07:10.098 [2024-12-06 11:03:20.905686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68809 ] 00:07:10.098 [2024-12-06 11:03:21.036215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.098 [2024-12-06 11:03:21.065397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=0x1 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=software 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=32 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=32 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 
-- # val=1 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val=No 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.098 11:03:21 -- accel/accel.sh@21 -- # val= 00:07:10.098 11:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.098 11:03:21 -- accel/accel.sh@20 -- # read -r var val 00:07:11.034 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.034 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.034 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.034 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.034 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.034 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.035 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.035 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.035 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.035 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.035 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.035 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.294 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.294 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.294 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.294 11:03:22 -- accel/accel.sh@21 -- # val= 00:07:11.294 11:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.294 11:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.294 11:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.294 11:03:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.294 11:03:22 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:11.294 11:03:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.294 00:07:11.294 real 0m2.599s 00:07:11.294 user 0m2.278s 00:07:11.294 sys 0m0.122s 00:07:11.294 11:03:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.294 11:03:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.294 ************************************ 00:07:11.294 END TEST accel_dif_generate_copy 00:07:11.294 ************************************ 00:07:11.294 11:03:22 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:11.294 11:03:22 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.294 11:03:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:11.294 11:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.294 11:03:22 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.294 ************************************ 00:07:11.294 START TEST accel_comp 00:07:11.294 ************************************ 00:07:11.294 11:03:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.294 11:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.294 11:03:22 -- accel/accel.sh@17 -- # local accel_module 00:07:11.294 11:03:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.294 11:03:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.294 11:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.294 11:03:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.294 11:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.294 11:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.294 11:03:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.294 11:03:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.294 11:03:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.294 11:03:22 -- accel/accel.sh@42 -- # jq -r . 00:07:11.294 [2024-12-06 11:03:22.261845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.294 [2024-12-06 11:03:22.261945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68838 ] 00:07:11.294 [2024-12-06 11:03:22.393513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.294 [2024-12-06 11:03:22.427656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.673 11:03:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:12.673 00:07:12.673 SPDK Configuration: 00:07:12.673 Core mask: 0x1 00:07:12.673 00:07:12.673 Accel Perf Configuration: 00:07:12.673 Workload Type: compress 00:07:12.673 Transfer size: 4096 bytes 00:07:12.673 Vector count 1 00:07:12.673 Module: software 00:07:12.673 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.673 Queue depth: 32 00:07:12.673 Allocate depth: 32 00:07:12.673 # threads/core: 1 00:07:12.673 Run time: 1 seconds 00:07:12.673 Verify: No 00:07:12.673 00:07:12.673 Running for 1 seconds... 
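The compress pass is the first in this sequence that needs real input data: the -l option points accel_perf at /home/vagrant/spdk_repo/spdk/test/accel/bib, which shows up as File Name in the configuration block above, and the captured 'Preparing input file...' line suggests the file is loaded before the timed run starts (an inference from the output, not something the log states). A standalone equivalent of the traced command, minus the harness-supplied -c /dev/fd/62, would look like:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib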
00:07:12.673 00:07:12.673 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.673 ------------------------------------------------------------------------------------ 00:07:12.673 0,0 47744/s 199 MiB/s 0 0 00:07:12.673 ==================================================================================== 00:07:12.673 Total 47744/s 186 MiB/s 0 0' 00:07:12.673 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.673 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.673 11:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.673 11:03:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.673 11:03:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.673 11:03:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.673 11:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.673 11:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.673 11:03:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.673 11:03:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.673 11:03:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.673 11:03:23 -- accel/accel.sh@42 -- # jq -r . 00:07:12.674 [2024-12-06 11:03:23.588361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.674 [2024-12-06 11:03:23.588766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68852 ] 00:07:12.674 [2024-12-06 11:03:23.729473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.674 [2024-12-06 11:03:23.762810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=0x1 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=compress 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=software 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=32 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=32 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=1 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val=No 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.674 11:03:23 -- accel/accel.sh@21 -- # val= 00:07:12.674 11:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.674 11:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 
00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@21 -- # val= 00:07:14.059 11:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:14.059 11:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.059 11:03:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.059 11:03:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:14.059 11:03:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.059 00:07:14.059 real 0m2.658s 00:07:14.059 user 0m2.297s 00:07:14.059 sys 0m0.151s 00:07:14.059 11:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.059 11:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:14.059 ************************************ 00:07:14.059 END TEST accel_comp 00:07:14.059 ************************************ 00:07:14.059 11:03:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.059 11:03:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:14.059 11:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.059 11:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:14.059 ************************************ 00:07:14.059 START TEST accel_decomp 00:07:14.059 ************************************ 00:07:14.059 11:03:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.059 11:03:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.059 11:03:24 -- accel/accel.sh@17 -- # local accel_module 00:07:14.059 11:03:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.059 11:03:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.059 11:03:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.059 11:03:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.059 11:03:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.059 11:03:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.059 11:03:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.059 11:03:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.059 11:03:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.059 11:03:24 -- accel/accel.sh@42 -- # jq -r . 00:07:14.059 [2024-12-06 11:03:24.975996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.059 [2024-12-06 11:03:24.976153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:07:14.059 [2024-12-06 11:03:25.113447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.059 [2024-12-06 11:03:25.147795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.466 11:03:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:15.466 00:07:15.466 SPDK Configuration: 00:07:15.466 Core mask: 0x1 00:07:15.466 00:07:15.466 Accel Perf Configuration: 00:07:15.466 Workload Type: decompress 00:07:15.466 Transfer size: 4096 bytes 00:07:15.466 Vector count 1 00:07:15.466 Module: software 00:07:15.466 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.466 Queue depth: 32 00:07:15.466 Allocate depth: 32 00:07:15.466 # threads/core: 1 00:07:15.466 Run time: 1 seconds 00:07:15.466 Verify: Yes 00:07:15.466 00:07:15.466 Running for 1 seconds... 00:07:15.466 00:07:15.466 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.466 ------------------------------------------------------------------------------------ 00:07:15.466 0,0 75840/s 139 MiB/s 0 0 00:07:15.466 ==================================================================================== 00:07:15.466 Total 75840/s 296 MiB/s 0 0' 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.466 11:03:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.466 11:03:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.466 11:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.466 11:03:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.466 11:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.466 11:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.466 11:03:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.466 11:03:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.466 11:03:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.466 11:03:26 -- accel/accel.sh@42 -- # jq -r . 00:07:15.466 [2024-12-06 11:03:26.300847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
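For the decompress pass the harness reuses the same bib input file but adds -y, and the configuration above reports Verify: Yes; a reasonable reading of the 'Preparing input file...' step is that accel_perf first compresses the file so that the timed loop has valid compressed data to decompress, though that is an assumption about the tool rather than something visible in this log. The traced command, without the harness-supplied -c /dev/fd/62:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y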
00:07:15.466 [2024-12-06 11:03:26.300957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:07:15.466 [2024-12-06 11:03:26.439182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.466 [2024-12-06 11:03:26.470707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.466 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.466 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.466 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.466 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.466 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.466 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.466 11:03:26 -- accel/accel.sh@21 -- # val=0x1 00:07:15.466 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.466 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.466 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.466 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=decompress 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=software 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=32 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- 
accel/accel.sh@21 -- # val=32 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=1 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val=Yes 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.467 11:03:26 -- accel/accel.sh@21 -- # val= 00:07:15.467 11:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.467 11:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@21 -- # val= 00:07:16.851 11:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.851 11:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.851 11:03:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.851 11:03:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.851 11:03:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.851 00:07:16.851 real 0m2.654s 00:07:16.851 user 0m2.305s 00:07:16.851 sys 0m0.142s 00:07:16.851 11:03:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.851 11:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:16.851 ************************************ 00:07:16.851 END TEST accel_decomp 00:07:16.851 ************************************ 00:07:16.851 11:03:27 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:16.851 11:03:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:16.851 11:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.851 11:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:16.851 ************************************ 00:07:16.851 START TEST accel_decmop_full 00:07:16.851 ************************************ 00:07:16.851 11:03:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.851 11:03:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.851 11:03:27 -- accel/accel.sh@17 -- # local accel_module 00:07:16.851 11:03:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.851 11:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.851 11:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.851 11:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.851 11:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.851 11:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.851 11:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.851 11:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.851 11:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.851 11:03:27 -- accel/accel.sh@42 -- # jq -r . 00:07:16.851 [2024-12-06 11:03:27.690626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.851 [2024-12-06 11:03:27.690766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68935 ] 00:07:16.851 [2024-12-06 11:03:27.830451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.851 [2024-12-06 11:03:27.862982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.229 11:03:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.229 00:07:18.229 SPDK Configuration: 00:07:18.229 Core mask: 0x1 00:07:18.229 00:07:18.229 Accel Perf Configuration: 00:07:18.229 Workload Type: decompress 00:07:18.229 Transfer size: 111250 bytes 00:07:18.229 Vector count 1 00:07:18.229 Module: software 00:07:18.229 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.229 Queue depth: 32 00:07:18.229 Allocate depth: 32 00:07:18.229 # threads/core: 1 00:07:18.229 Run time: 1 seconds 00:07:18.229 Verify: Yes 00:07:18.229 00:07:18.229 Running for 1 seconds... 
00:07:18.229 00:07:18.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.229 ------------------------------------------------------------------------------------ 00:07:18.229 0,0 5152/s 212 MiB/s 0 0 00:07:18.229 ==================================================================================== 00:07:18.229 Total 5152/s 546 MiB/s 0 0' 00:07:18.229 11:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.229 11:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.229 11:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.229 11:03:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.229 11:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.229 11:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.229 11:03:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.229 11:03:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.229 11:03:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.229 11:03:28 -- accel/accel.sh@42 -- # jq -r . 00:07:18.229 [2024-12-06 11:03:29.017531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.229 [2024-12-06 11:03:29.017652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68949 ] 00:07:18.229 [2024-12-06 11:03:29.156675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.229 [2024-12-06 11:03:29.189389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val=0x1 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val=decompress 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.229 11:03:29 -- accel/accel.sh@20 
-- # IFS=: 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.229 11:03:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:18.229 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.229 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=software 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=32 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=32 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=1 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val=Yes 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.230 11:03:29 -- accel/accel.sh@21 -- # val= 00:07:18.230 11:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.230 11:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # 
val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 11:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 11:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 11:03:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.606 11:03:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.606 11:03:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.606 00:07:19.606 real 0m2.667s 00:07:19.606 user 0m1.160s 00:07:19.606 sys 0m0.070s 00:07:19.606 11:03:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.606 ************************************ 00:07:19.606 END TEST accel_decmop_full 00:07:19.606 ************************************ 00:07:19.606 11:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.606 11:03:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.606 11:03:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:19.606 11:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.606 11:03:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.606 ************************************ 00:07:19.606 START TEST accel_decomp_mcore 00:07:19.606 ************************************ 00:07:19.606 11:03:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.606 11:03:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.606 11:03:30 -- accel/accel.sh@17 -- # local accel_module 00:07:19.606 11:03:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.606 11:03:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.606 11:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.606 11:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.606 11:03:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.606 11:03:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.606 11:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.606 11:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.606 11:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.606 11:03:30 -- accel/accel.sh@42 -- # jq -r . 00:07:19.606 [2024-12-06 11:03:30.403225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.606 [2024-12-06 11:03:30.403574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68989 ] 00:07:19.606 [2024-12-06 11:03:30.536251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.606 [2024-12-06 11:03:30.573356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.606 [2024-12-06 11:03:30.573503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.606 [2024-12-06 11:03:30.573654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.606 [2024-12-06 11:03:30.573972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.981 11:03:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.981 00:07:20.981 SPDK Configuration: 00:07:20.981 Core mask: 0xf 00:07:20.981 00:07:20.981 Accel Perf Configuration: 00:07:20.981 Workload Type: decompress 00:07:20.981 Transfer size: 4096 bytes 00:07:20.981 Vector count 1 00:07:20.981 Module: software 00:07:20.981 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.981 Queue depth: 32 00:07:20.981 Allocate depth: 32 00:07:20.981 # threads/core: 1 00:07:20.981 Run time: 1 seconds 00:07:20.981 Verify: Yes 00:07:20.981 00:07:20.981 Running for 1 seconds... 00:07:20.981 00:07:20.981 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.981 ------------------------------------------------------------------------------------ 00:07:20.981 0,0 61536/s 113 MiB/s 0 0 00:07:20.981 3,0 60480/s 111 MiB/s 0 0 00:07:20.981 2,0 60832/s 112 MiB/s 0 0 00:07:20.981 1,0 57248/s 105 MiB/s 0 0 00:07:20.981 ==================================================================================== 00:07:20.981 Total 240096/s 937 MiB/s 0 0' 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.981 11:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.981 11:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.981 11:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.981 11:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.981 11:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.981 11:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.981 11:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.981 11:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.981 11:03:31 -- accel/accel.sh@42 -- # jq -r . 00:07:20.981 [2024-12-06 11:03:31.762029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
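Four reactors and four Core,Thread rows appear above because this stage passes -m 0xf: each set bit in the core mask selects one core. A small sketch of decoding the mask (a hypothetical helper, not part of the test scripts):

    # bit i of the core mask set => a reactor (and a result row) on core i
    mask=0xf
    for ((i = 0; i < 8; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i"
    done
    # prints core 0 .. core 3, matching the reactor start-up lines and the
    # 0,0 / 1,0 / 2,0 / 3,0 rows in the table above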
00:07:20.981 [2024-12-06 11:03:31.762146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69006 ] 00:07:20.981 [2024-12-06 11:03:31.900957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.981 [2024-12-06 11:03:31.940604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.981 [2024-12-06 11:03:31.940716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.981 [2024-12-06 11:03:31.940845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.981 [2024-12-06 11:03:31.940852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val=0xf 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.981 11:03:31 -- accel/accel.sh@21 -- # val=decompress 00:07:20.981 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.981 11:03:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:20.981 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=software 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 
00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=32 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=32 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=1 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val=Yes 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.982 11:03:31 -- accel/accel.sh@21 -- # val= 00:07:20.982 11:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.982 11:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@21 -- # val= 00:07:22.354 11:03:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.354 11:03:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.354 11:03:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.354 11:03:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.354 11:03:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.354 ************************************ 00:07:22.354 END TEST accel_decomp_mcore 00:07:22.354 ************************************ 00:07:22.354 00:07:22.354 real 0m2.708s 00:07:22.354 user 0m8.826s 00:07:22.354 sys 0m0.174s 00:07:22.354 11:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.354 11:03:33 -- common/autotest_common.sh@10 -- # set +x 00:07:22.354 11:03:33 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.354 11:03:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:22.354 11:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.354 11:03:33 -- common/autotest_common.sh@10 -- # set +x 00:07:22.354 ************************************ 00:07:22.354 START TEST accel_decomp_full_mcore 00:07:22.354 ************************************ 00:07:22.354 11:03:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.354 11:03:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.354 11:03:33 -- accel/accel.sh@17 -- # local accel_module 00:07:22.354 11:03:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.354 11:03:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.354 11:03:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.354 11:03:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.354 11:03:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.354 11:03:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.354 11:03:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.354 11:03:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.354 11:03:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.354 11:03:33 -- accel/accel.sh@42 -- # jq -r . 00:07:22.354 [2024-12-06 11:03:33.162406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.354 [2024-12-06 11:03:33.162491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69044 ] 00:07:22.354 [2024-12-06 11:03:33.298267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.354 [2024-12-06 11:03:33.342803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.354 [2024-12-06 11:03:33.342896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.354 [2024-12-06 11:03:33.344018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.354 [2024-12-06 11:03:33.344064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.776 11:03:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:23.776 00:07:23.776 SPDK Configuration: 00:07:23.776 Core mask: 0xf 00:07:23.776 00:07:23.776 Accel Perf Configuration: 00:07:23.776 Workload Type: decompress 00:07:23.776 Transfer size: 111250 bytes 00:07:23.776 Vector count 1 00:07:23.776 Module: software 00:07:23.776 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.776 Queue depth: 32 00:07:23.776 Allocate depth: 32 00:07:23.776 # threads/core: 1 00:07:23.776 Run time: 1 seconds 00:07:23.776 Verify: Yes 00:07:23.776 00:07:23.776 Running for 1 seconds... 00:07:23.776 00:07:23.776 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.776 ------------------------------------------------------------------------------------ 00:07:23.776 0,0 4352/s 179 MiB/s 0 0 00:07:23.776 3,0 4352/s 179 MiB/s 0 0 00:07:23.776 2,0 4352/s 179 MiB/s 0 0 00:07:23.776 1,0 4352/s 179 MiB/s 0 0 00:07:23.776 ==================================================================================== 00:07:23.776 Total 17408/s 1846 MiB/s 0 0' 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.776 11:03:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.776 11:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.776 11:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.776 11:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.776 11:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.776 11:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.776 11:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.776 11:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.776 11:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.776 11:03:34 -- accel/accel.sh@42 -- # jq -r . 00:07:23.776 [2024-12-06 11:03:34.514968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
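The Total row above is consistent with transfers-per-second multiplied by the 111250-byte transfer size, truncated to whole MiB/s (MiB taken as 2^20 bytes); the same check reproduces the totals in the other summary tables of this run. A one-liner to confirm it:

    # 17408 transfers/s x 111250-byte blocks, truncated to whole MiB/s
    echo "$(( 17408 * 111250 / 1048576 )) MiB/s"   # 1846 MiB/s, matching the Total row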
00:07:23.776 [2024-12-06 11:03:34.515069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69066 ] 00:07:23.776 [2024-12-06 11:03:34.648046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.776 [2024-12-06 11:03:34.683327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.776 [2024-12-06 11:03:34.683447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.776 [2024-12-06 11:03:34.683576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.776 [2024-12-06 11:03:34.683909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.776 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.776 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.776 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.776 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.776 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.776 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.776 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=0xf 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=decompress 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=software 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 
00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=32 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=32 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=1 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val=Yes 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.777 11:03:34 -- accel/accel.sh@21 -- # val= 00:07:23.777 11:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.777 11:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- 
accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@21 -- # val= 00:07:24.709 11:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.709 11:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.709 11:03:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.709 11:03:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.709 11:03:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.709 00:07:24.709 real 0m2.685s 00:07:24.709 user 0m8.818s 00:07:24.709 sys 0m0.172s 00:07:24.709 ************************************ 00:07:24.709 END TEST accel_decomp_full_mcore 00:07:24.709 ************************************ 00:07:24.709 11:03:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.709 11:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:24.967 11:03:35 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:24.967 11:03:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:24.967 11:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.967 11:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:24.967 ************************************ 00:07:24.967 START TEST accel_decomp_mthread 00:07:24.967 ************************************ 00:07:24.967 11:03:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:24.967 11:03:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.967 11:03:35 -- accel/accel.sh@17 -- # local accel_module 00:07:24.967 11:03:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:24.967 11:03:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:24.967 11:03:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.967 11:03:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.967 11:03:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.967 11:03:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.967 11:03:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.967 11:03:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.967 11:03:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.967 11:03:35 -- accel/accel.sh@42 -- # jq -r . 00:07:24.967 [2024-12-06 11:03:35.897148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.967 [2024-12-06 11:03:35.897393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69098 ] 00:07:24.967 [2024-12-06 11:03:36.028425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.967 [2024-12-06 11:03:36.060737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.344 11:03:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:26.344 00:07:26.344 SPDK Configuration: 00:07:26.344 Core mask: 0x1 00:07:26.344 00:07:26.344 Accel Perf Configuration: 00:07:26.344 Workload Type: decompress 00:07:26.344 Transfer size: 4096 bytes 00:07:26.344 Vector count 1 00:07:26.344 Module: software 00:07:26.344 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.344 Queue depth: 32 00:07:26.344 Allocate depth: 32 00:07:26.344 # threads/core: 2 00:07:26.344 Run time: 1 seconds 00:07:26.344 Verify: Yes 00:07:26.344 00:07:26.344 Running for 1 seconds... 00:07:26.344 00:07:26.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.344 ------------------------------------------------------------------------------------ 00:07:26.344 0,1 36704/s 67 MiB/s 0 0 00:07:26.344 0,0 36608/s 67 MiB/s 0 0 00:07:26.344 ==================================================================================== 00:07:26.344 Total 73312/s 286 MiB/s 0 0' 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.344 11:03:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.344 11:03:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.344 11:03:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.344 11:03:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.344 11:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.344 11:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.344 11:03:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.344 11:03:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.344 11:03:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.344 11:03:37 -- accel/accel.sh@42 -- # jq -r . 00:07:26.344 [2024-12-06 11:03:37.216593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
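This stage passes -T 2, so two worker threads share core 0 ("# threads/core: 2" in the configuration above) and the table reports them as rows 0,0 and 0,1; the Total row is simply their sum:

    # per-thread transfer rates taken from the table above
    echo "$(( 36704 + 36608 )) transfers/s"   # 73312/s, matching the Total row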
00:07:26.344 [2024-12-06 11:03:37.216715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69118 ] 00:07:26.344 [2024-12-06 11:03:37.351839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.344 [2024-12-06 11:03:37.386298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.344 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.344 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.344 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.344 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.344 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.344 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.344 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=0x1 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=decompress 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=software 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=32 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- 
accel/accel.sh@21 -- # val=32 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=2 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val=Yes 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.345 11:03:37 -- accel/accel.sh@21 -- # val= 00:07:26.345 11:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.345 11:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@21 -- # val= 00:07:27.731 11:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.731 11:03:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.731 11:03:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.731 11:03:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.731 11:03:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.731 00:07:27.731 real 0m2.649s 00:07:27.731 user 0m2.301s 00:07:27.731 sys 0m0.145s 00:07:27.731 11:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.731 11:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.731 ************************************ 00:07:27.731 END 
TEST accel_decomp_mthread 00:07:27.731 ************************************ 00:07:27.731 11:03:38 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.731 11:03:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.731 11:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.731 11:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.731 ************************************ 00:07:27.731 START TEST accel_deomp_full_mthread 00:07:27.731 ************************************ 00:07:27.731 11:03:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.731 11:03:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.731 11:03:38 -- accel/accel.sh@17 -- # local accel_module 00:07:27.731 11:03:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.731 11:03:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.731 11:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.731 11:03:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.731 11:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.731 11:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.731 11:03:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.731 11:03:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.731 11:03:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.731 11:03:38 -- accel/accel.sh@42 -- # jq -r . 00:07:27.731 [2024-12-06 11:03:38.598810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.731 [2024-12-06 11:03:38.598898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69151 ] 00:07:27.731 [2024-12-06 11:03:38.737250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.731 [2024-12-06 11:03:38.772339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.132 11:03:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.132 00:07:29.132 SPDK Configuration: 00:07:29.132 Core mask: 0x1 00:07:29.132 00:07:29.132 Accel Perf Configuration: 00:07:29.132 Workload Type: decompress 00:07:29.132 Transfer size: 111250 bytes 00:07:29.132 Vector count 1 00:07:29.132 Module: software 00:07:29.132 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.132 Queue depth: 32 00:07:29.132 Allocate depth: 32 00:07:29.132 # threads/core: 2 00:07:29.132 Run time: 1 seconds 00:07:29.132 Verify: Yes 00:07:29.132 00:07:29.132 Running for 1 seconds... 
00:07:29.132 00:07:29.133 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.133 ------------------------------------------------------------------------------------ 00:07:29.133 0,1 2464/s 101 MiB/s 0 0 00:07:29.133 0,0 2432/s 100 MiB/s 0 0 00:07:29.133 ==================================================================================== 00:07:29.133 Total 4896/s 519 MiB/s 0 0' 00:07:29.133 11:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.133 11:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.133 11:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.133 11:03:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.133 11:03:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.133 11:03:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.133 11:03:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.133 11:03:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.133 11:03:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.133 11:03:39 -- accel/accel.sh@42 -- # jq -r . 00:07:29.133 [2024-12-06 11:03:39.949505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.133 [2024-12-06 11:03:39.949616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69166 ] 00:07:29.133 [2024-12-06 11:03:40.083933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.133 [2024-12-06 11:03:40.119334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=0x1 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=decompress 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=software 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=32 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=32 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=2 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val=Yes 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.133 11:03:40 -- accel/accel.sh@21 -- # val= 00:07:29.133 11:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.133 11:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # 
read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@21 -- # val= 00:07:30.511 11:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.511 11:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.511 11:03:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.511 11:03:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.511 11:03:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.511 00:07:30.511 real 0m2.692s 00:07:30.511 user 0m2.348s 00:07:30.511 sys 0m0.145s 00:07:30.511 11:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.511 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.511 ************************************ 00:07:30.511 END TEST accel_deomp_full_mthread 00:07:30.511 ************************************ 00:07:30.511 11:03:41 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:30.511 11:03:41 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:30.511 11:03:41 -- accel/accel.sh@129 -- # build_accel_config 00:07:30.511 11:03:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:30.511 11:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.511 11:03:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.511 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.511 11:03:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.511 11:03:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.511 11:03:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.511 11:03:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.511 11:03:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.511 11:03:41 -- accel/accel.sh@42 -- # jq -r . 00:07:30.511 ************************************ 00:07:30.511 START TEST accel_dif_functional_tests 00:07:30.511 ************************************ 00:07:30.511 11:03:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:30.511 [2024-12-06 11:03:41.366092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:30.511 [2024-12-06 11:03:41.366176] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69202 ] 00:07:30.511 [2024-12-06 11:03:41.504947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.511 [2024-12-06 11:03:41.539380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.511 [2024-12-06 11:03:41.539506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.511 [2024-12-06 11:03:41.539510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.511 00:07:30.511 00:07:30.511 CUnit - A unit testing framework for C - Version 2.1-3 00:07:30.511 http://cunit.sourceforge.net/ 00:07:30.511 00:07:30.511 00:07:30.511 Suite: accel_dif 00:07:30.511 Test: verify: DIF generated, GUARD check ...passed 00:07:30.511 Test: verify: DIF generated, APPTAG check ...passed 00:07:30.511 Test: verify: DIF generated, REFTAG check ...passed 00:07:30.511 Test: verify: DIF not generated, GUARD check ...passed 00:07:30.511 Test: verify: DIF not generated, APPTAG check ...[2024-12-06 11:03:41.587706] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:30.511 [2024-12-06 11:03:41.587913] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:30.511 [2024-12-06 11:03:41.587957] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:30.511 passed 00:07:30.511 Test: verify: DIF not generated, REFTAG check ...passed 00:07:30.511 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:30.511 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-06 11:03:41.587988] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:30.511 [2024-12-06 11:03:41.588018] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:30.511 [2024-12-06 11:03:41.588043] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:30.511 passed 00:07:30.511 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:30.511 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:30.511 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:30.511 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:30.511 Test: generate copy: DIF generated, GUARD check ...[2024-12-06 11:03:41.588143] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:30.511 [2024-12-06 11:03:41.588429] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:30.511 passed 00:07:30.511 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:30.511 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:30.511 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:30.511 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:30.511 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:30.511 Test: generate copy: iovecs-len validate ...passed 00:07:30.511 Test: generate copy: buffer alignment validate ...[2024-12-06 11:03:41.589147] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:30.511 passed 00:07:30.511 00:07:30.511 Run Summary: Type Total Ran Passed Failed Inactive 00:07:30.511 suites 1 1 n/a 0 0 00:07:30.511 tests 20 20 20 0 0 00:07:30.511 asserts 204 204 204 0 n/a 00:07:30.511 00:07:30.511 Elapsed time = 0.005 seconds 00:07:30.770 00:07:30.770 real 0m0.417s 00:07:30.770 user 0m0.480s 00:07:30.770 sys 0m0.096s 00:07:30.770 ************************************ 00:07:30.770 END TEST accel_dif_functional_tests 00:07:30.770 ************************************ 00:07:30.770 11:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.770 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.770 00:07:30.770 real 0m56.705s 00:07:30.770 user 1m2.011s 00:07:30.770 sys 0m4.188s 00:07:30.770 11:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.770 ************************************ 00:07:30.770 END TEST accel 00:07:30.770 ************************************ 00:07:30.770 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.770 11:03:41 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:30.770 11:03:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.770 11:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.770 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.770 ************************************ 00:07:30.770 START TEST accel_rpc 00:07:30.770 ************************************ 00:07:30.770 11:03:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:30.770 * Looking for test storage... 00:07:30.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:30.770 11:03:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:30.770 11:03:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:30.770 11:03:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.029 11:03:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.029 11:03:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.029 11:03:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.029 11:03:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.029 11:03:41 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.029 11:03:41 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.029 11:03:41 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.029 11:03:41 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.029 11:03:41 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.029 11:03:41 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.029 11:03:41 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.029 11:03:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.029 11:03:41 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.029 11:03:41 -- scripts/common.sh@344 -- # : 1 00:07:31.029 11:03:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.029 11:03:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.029 11:03:41 -- scripts/common.sh@364 -- # decimal 1 00:07:31.029 11:03:41 -- scripts/common.sh@352 -- # local d=1 00:07:31.029 11:03:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.029 11:03:41 -- scripts/common.sh@354 -- # echo 1 00:07:31.029 11:03:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.029 11:03:41 -- scripts/common.sh@365 -- # decimal 2 00:07:31.029 11:03:41 -- scripts/common.sh@352 -- # local d=2 00:07:31.029 11:03:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.029 11:03:41 -- scripts/common.sh@354 -- # echo 2 00:07:31.029 11:03:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.029 11:03:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.029 11:03:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.029 11:03:41 -- scripts/common.sh@367 -- # return 0 00:07:31.029 11:03:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.029 11:03:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.029 --rc genhtml_branch_coverage=1 00:07:31.029 --rc genhtml_function_coverage=1 00:07:31.029 --rc genhtml_legend=1 00:07:31.029 --rc geninfo_all_blocks=1 00:07:31.029 --rc geninfo_unexecuted_blocks=1 00:07:31.029 00:07:31.029 ' 00:07:31.029 11:03:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.029 --rc genhtml_branch_coverage=1 00:07:31.030 --rc genhtml_function_coverage=1 00:07:31.030 --rc genhtml_legend=1 00:07:31.030 --rc geninfo_all_blocks=1 00:07:31.030 --rc geninfo_unexecuted_blocks=1 00:07:31.030 00:07:31.030 ' 00:07:31.030 11:03:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.030 --rc genhtml_branch_coverage=1 00:07:31.030 --rc genhtml_function_coverage=1 00:07:31.030 --rc genhtml_legend=1 00:07:31.030 --rc geninfo_all_blocks=1 00:07:31.030 --rc geninfo_unexecuted_blocks=1 00:07:31.030 00:07:31.030 ' 00:07:31.030 11:03:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.030 --rc genhtml_branch_coverage=1 00:07:31.030 --rc genhtml_function_coverage=1 00:07:31.030 --rc genhtml_legend=1 00:07:31.030 --rc geninfo_all_blocks=1 00:07:31.030 --rc geninfo_unexecuted_blocks=1 00:07:31.030 00:07:31.030 ' 00:07:31.030 11:03:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:31.030 11:03:41 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:31.030 11:03:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69273 00:07:31.030 11:03:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 69273 00:07:31.030 11:03:41 -- common/autotest_common.sh@829 -- # '[' -z 69273 ']' 00:07:31.030 11:03:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.030 11:03:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.030 11:03:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.030 11:03:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.030 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:31.030 [2024-12-06 11:03:42.051249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.030 [2024-12-06 11:03:42.051564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69273 ] 00:07:31.289 [2024-12-06 11:03:42.187194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.289 [2024-12-06 11:03:42.220462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.289 [2024-12-06 11:03:42.220905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.289 11:03:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.289 11:03:42 -- common/autotest_common.sh@862 -- # return 0 00:07:31.289 11:03:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:31.289 11:03:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:31.289 11:03:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:31.289 11:03:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:31.289 11:03:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:31.289 11:03:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.289 11:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.289 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.289 ************************************ 00:07:31.289 START TEST accel_assign_opcode 00:07:31.289 ************************************ 00:07:31.289 11:03:42 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:31.290 11:03:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:31.290 11:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.290 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.290 [2024-12-06 11:03:42.309385] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:31.290 11:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.290 11:03:42 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:31.290 11:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.290 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.290 [2024-12-06 11:03:42.317384] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:31.290 11:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.290 11:03:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:31.290 11:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.290 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.548 11:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.548 11:03:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:31.548 11:03:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:31.548 11:03:42 -- accel/accel_rpc.sh@42 -- # grep software 00:07:31.548 11:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.548 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.548 11:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.548 software 00:07:31.548 
************************************ 00:07:31.548 END TEST accel_assign_opcode 00:07:31.548 ************************************ 00:07:31.548 00:07:31.548 real 0m0.199s 00:07:31.549 user 0m0.054s 00:07:31.549 sys 0m0.011s 00:07:31.549 11:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.549 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.549 11:03:42 -- accel/accel_rpc.sh@55 -- # killprocess 69273 00:07:31.549 11:03:42 -- common/autotest_common.sh@936 -- # '[' -z 69273 ']' 00:07:31.549 11:03:42 -- common/autotest_common.sh@940 -- # kill -0 69273 00:07:31.549 11:03:42 -- common/autotest_common.sh@941 -- # uname 00:07:31.549 11:03:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.549 11:03:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69273 00:07:31.549 killing process with pid 69273 00:07:31.549 11:03:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:31.549 11:03:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:31.549 11:03:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69273' 00:07:31.549 11:03:42 -- common/autotest_common.sh@955 -- # kill 69273 00:07:31.549 11:03:42 -- common/autotest_common.sh@960 -- # wait 69273 00:07:31.807 00:07:31.807 real 0m1.002s 00:07:31.807 user 0m1.013s 00:07:31.807 sys 0m0.311s 00:07:31.807 ************************************ 00:07:31.807 END TEST accel_rpc 00:07:31.807 ************************************ 00:07:31.807 11:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.807 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.807 11:03:42 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:31.807 11:03:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.807 11:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.807 11:03:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.807 ************************************ 00:07:31.807 START TEST app_cmdline 00:07:31.807 ************************************ 00:07:31.807 11:03:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:31.807 * Looking for test storage... 
00:07:31.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:31.807 11:03:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.807 11:03:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.807 11:03:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:32.065 11:03:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:32.065 11:03:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:32.065 11:03:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:32.065 11:03:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:32.065 11:03:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:32.065 11:03:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:32.065 11:03:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.065 11:03:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:32.065 11:03:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:32.065 11:03:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:32.065 11:03:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:32.065 11:03:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:32.065 11:03:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:32.065 11:03:43 -- scripts/common.sh@344 -- # : 1 00:07:32.065 11:03:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:32.065 11:03:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.065 11:03:43 -- scripts/common.sh@364 -- # decimal 1 00:07:32.065 11:03:43 -- scripts/common.sh@352 -- # local d=1 00:07:32.065 11:03:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.065 11:03:43 -- scripts/common.sh@354 -- # echo 1 00:07:32.065 11:03:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:32.065 11:03:43 -- scripts/common.sh@365 -- # decimal 2 00:07:32.065 11:03:43 -- scripts/common.sh@352 -- # local d=2 00:07:32.065 11:03:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.065 11:03:43 -- scripts/common.sh@354 -- # echo 2 00:07:32.065 11:03:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:32.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.065 11:03:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:32.065 11:03:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:32.065 11:03:43 -- scripts/common.sh@367 -- # return 0 00:07:32.065 11:03:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.065 11:03:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:32.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.065 --rc genhtml_branch_coverage=1 00:07:32.065 --rc genhtml_function_coverage=1 00:07:32.065 --rc genhtml_legend=1 00:07:32.065 --rc geninfo_all_blocks=1 00:07:32.065 --rc geninfo_unexecuted_blocks=1 00:07:32.066 00:07:32.066 ' 00:07:32.066 11:03:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:32.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.066 --rc genhtml_branch_coverage=1 00:07:32.066 --rc genhtml_function_coverage=1 00:07:32.066 --rc genhtml_legend=1 00:07:32.066 --rc geninfo_all_blocks=1 00:07:32.066 --rc geninfo_unexecuted_blocks=1 00:07:32.066 00:07:32.066 ' 00:07:32.066 11:03:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:32.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.066 --rc genhtml_branch_coverage=1 00:07:32.066 --rc genhtml_function_coverage=1 00:07:32.066 --rc genhtml_legend=1 00:07:32.066 --rc geninfo_all_blocks=1 00:07:32.066 --rc geninfo_unexecuted_blocks=1 00:07:32.066 00:07:32.066 ' 00:07:32.066 11:03:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:32.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.066 --rc genhtml_branch_coverage=1 00:07:32.066 --rc genhtml_function_coverage=1 00:07:32.066 --rc genhtml_legend=1 00:07:32.066 --rc geninfo_all_blocks=1 00:07:32.066 --rc geninfo_unexecuted_blocks=1 00:07:32.066 00:07:32.066 ' 00:07:32.066 11:03:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:32.066 11:03:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69360 00:07:32.066 11:03:43 -- app/cmdline.sh@18 -- # waitforlisten 69360 00:07:32.066 11:03:43 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:32.066 11:03:43 -- common/autotest_common.sh@829 -- # '[' -z 69360 ']' 00:07:32.066 11:03:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.066 11:03:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.066 11:03:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.066 11:03:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.066 11:03:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.066 [2024-12-06 11:03:43.104504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.066 [2024-12-06 11:03:43.104830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69360 ] 00:07:32.325 [2024-12-06 11:03:43.243837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.325 [2024-12-06 11:03:43.274788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.325 [2024-12-06 11:03:43.275164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.262 11:03:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.262 11:03:44 -- common/autotest_common.sh@862 -- # return 0 00:07:33.262 11:03:44 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:33.262 { 00:07:33.262 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:33.262 "fields": { 00:07:33.262 "major": 24, 00:07:33.262 "minor": 1, 00:07:33.262 "patch": 1, 00:07:33.262 "suffix": "-pre", 00:07:33.262 "commit": "c13c99a5e" 00:07:33.262 } 00:07:33.262 } 00:07:33.262 11:03:44 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:33.262 11:03:44 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:33.262 11:03:44 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:33.262 11:03:44 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:33.262 11:03:44 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:33.262 11:03:44 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:33.262 11:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.262 11:03:44 -- app/cmdline.sh@26 -- # sort 00:07:33.262 11:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.262 11:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.522 11:03:44 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:33.522 11:03:44 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:33.522 11:03:44 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.522 11:03:44 -- common/autotest_common.sh@650 -- # local es=0 00:07:33.522 11:03:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.522 11:03:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.522 11:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.522 11:03:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.522 11:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.522 11:03:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.522 11:03:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.522 11:03:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.522 11:03:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:33.522 11:03:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.781 request: 00:07:33.781 { 00:07:33.781 "method": "env_dpdk_get_mem_stats", 00:07:33.781 "req_id": 1 00:07:33.781 } 00:07:33.781 Got 
JSON-RPC error response 00:07:33.781 response: 00:07:33.781 { 00:07:33.781 "code": -32601, 00:07:33.781 "message": "Method not found" 00:07:33.781 } 00:07:33.781 11:03:44 -- common/autotest_common.sh@653 -- # es=1 00:07:33.781 11:03:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.781 11:03:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.781 11:03:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.781 11:03:44 -- app/cmdline.sh@1 -- # killprocess 69360 00:07:33.781 11:03:44 -- common/autotest_common.sh@936 -- # '[' -z 69360 ']' 00:07:33.781 11:03:44 -- common/autotest_common.sh@940 -- # kill -0 69360 00:07:33.781 11:03:44 -- common/autotest_common.sh@941 -- # uname 00:07:33.781 11:03:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.781 11:03:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69360 00:07:33.781 killing process with pid 69360 00:07:33.781 11:03:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.781 11:03:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.781 11:03:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69360' 00:07:33.781 11:03:44 -- common/autotest_common.sh@955 -- # kill 69360 00:07:33.781 11:03:44 -- common/autotest_common.sh@960 -- # wait 69360 00:07:34.040 00:07:34.040 real 0m2.101s 00:07:34.040 user 0m2.774s 00:07:34.040 sys 0m0.395s 00:07:34.040 11:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.040 11:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:34.040 ************************************ 00:07:34.040 END TEST app_cmdline 00:07:34.040 ************************************ 00:07:34.040 11:03:45 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.040 11:03:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.040 11:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.040 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.040 ************************************ 00:07:34.040 START TEST version 00:07:34.040 ************************************ 00:07:34.040 11:03:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.040 * Looking for test storage... 
00:07:34.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:34.040 11:03:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:34.040 11:03:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:34.040 11:03:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:34.040 11:03:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:34.040 11:03:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:34.040 11:03:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:34.040 11:03:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:34.040 11:03:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:34.040 11:03:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:34.040 11:03:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.040 11:03:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:34.040 11:03:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:34.040 11:03:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:34.040 11:03:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:34.040 11:03:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:34.040 11:03:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:34.040 11:03:45 -- scripts/common.sh@344 -- # : 1 00:07:34.040 11:03:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:34.040 11:03:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.040 11:03:45 -- scripts/common.sh@364 -- # decimal 1 00:07:34.040 11:03:45 -- scripts/common.sh@352 -- # local d=1 00:07:34.040 11:03:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.040 11:03:45 -- scripts/common.sh@354 -- # echo 1 00:07:34.040 11:03:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:34.040 11:03:45 -- scripts/common.sh@365 -- # decimal 2 00:07:34.040 11:03:45 -- scripts/common.sh@352 -- # local d=2 00:07:34.040 11:03:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.040 11:03:45 -- scripts/common.sh@354 -- # echo 2 00:07:34.040 11:03:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:34.040 11:03:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:34.040 11:03:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:34.040 11:03:45 -- scripts/common.sh@367 -- # return 0 00:07:34.040 11:03:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.040 11:03:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.040 --rc genhtml_branch_coverage=1 00:07:34.040 --rc genhtml_function_coverage=1 00:07:34.040 --rc genhtml_legend=1 00:07:34.040 --rc geninfo_all_blocks=1 00:07:34.040 --rc geninfo_unexecuted_blocks=1 00:07:34.040 00:07:34.040 ' 00:07:34.040 11:03:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:34.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.040 --rc genhtml_branch_coverage=1 00:07:34.040 --rc genhtml_function_coverage=1 00:07:34.040 --rc genhtml_legend=1 00:07:34.040 --rc geninfo_all_blocks=1 00:07:34.040 --rc geninfo_unexecuted_blocks=1 00:07:34.040 00:07:34.040 ' 00:07:34.040 11:03:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:34.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.041 --rc genhtml_branch_coverage=1 00:07:34.041 --rc genhtml_function_coverage=1 00:07:34.041 --rc genhtml_legend=1 00:07:34.041 --rc geninfo_all_blocks=1 00:07:34.041 --rc geninfo_unexecuted_blocks=1 00:07:34.041 00:07:34.041 ' 00:07:34.041 11:03:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:34.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.041 --rc genhtml_branch_coverage=1 00:07:34.041 --rc genhtml_function_coverage=1 00:07:34.041 --rc genhtml_legend=1 00:07:34.041 --rc geninfo_all_blocks=1 00:07:34.041 --rc geninfo_unexecuted_blocks=1 00:07:34.041 00:07:34.041 ' 00:07:34.299 11:03:45 -- app/version.sh@17 -- # get_header_version major 00:07:34.299 11:03:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.299 11:03:45 -- app/version.sh@14 -- # cut -f2 00:07:34.299 11:03:45 -- app/version.sh@14 -- # tr -d '"' 00:07:34.299 11:03:45 -- app/version.sh@17 -- # major=24 00:07:34.299 11:03:45 -- app/version.sh@18 -- # get_header_version minor 00:07:34.299 11:03:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.299 11:03:45 -- app/version.sh@14 -- # cut -f2 00:07:34.299 11:03:45 -- app/version.sh@14 -- # tr -d '"' 00:07:34.299 11:03:45 -- app/version.sh@18 -- # minor=1 00:07:34.299 11:03:45 -- app/version.sh@19 -- # get_header_version patch 00:07:34.299 11:03:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.299 11:03:45 -- app/version.sh@14 -- # cut -f2 00:07:34.299 11:03:45 -- app/version.sh@14 -- # tr -d '"' 00:07:34.299 11:03:45 -- app/version.sh@19 -- # patch=1 00:07:34.299 11:03:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:34.299 11:03:45 -- app/version.sh@14 -- # cut -f2 00:07:34.299 11:03:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.299 11:03:45 -- app/version.sh@14 -- # tr -d '"' 00:07:34.299 11:03:45 -- app/version.sh@20 -- # suffix=-pre 00:07:34.299 11:03:45 -- app/version.sh@22 -- # version=24.1 00:07:34.299 11:03:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:34.299 11:03:45 -- app/version.sh@25 -- # version=24.1.1 00:07:34.299 11:03:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:34.299 11:03:45 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:34.299 11:03:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:34.299 11:03:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:34.299 11:03:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:34.299 ************************************ 00:07:34.299 END TEST version 00:07:34.299 ************************************ 00:07:34.299 00:07:34.299 real 0m0.228s 00:07:34.299 user 0m0.140s 00:07:34.299 sys 0m0.126s 00:07:34.299 11:03:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.299 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 11:03:45 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:34.299 11:03:45 -- spdk/autotest.sh@191 -- # uname -s 00:07:34.299 11:03:45 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:34.299 11:03:45 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:34.299 11:03:45 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:34.299 11:03:45 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:34.299 11:03:45 -- spdk/autotest.sh@199 -- # run_test spdk_dd 
/home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:34.299 11:03:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:34.299 11:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.299 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 ************************************ 00:07:34.299 START TEST spdk_dd 00:07:34.299 ************************************ 00:07:34.299 11:03:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:34.299 * Looking for test storage... 00:07:34.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.299 11:03:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:34.299 11:03:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:34.299 11:03:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:34.558 11:03:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:34.558 11:03:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:34.558 11:03:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:34.558 11:03:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:34.558 11:03:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:34.558 11:03:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:34.558 11:03:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.558 11:03:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:34.558 11:03:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:34.558 11:03:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:34.558 11:03:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:34.558 11:03:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:34.558 11:03:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:34.558 11:03:45 -- scripts/common.sh@344 -- # : 1 00:07:34.558 11:03:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:34.558 11:03:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.558 11:03:45 -- scripts/common.sh@364 -- # decimal 1 00:07:34.558 11:03:45 -- scripts/common.sh@352 -- # local d=1 00:07:34.558 11:03:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.558 11:03:45 -- scripts/common.sh@354 -- # echo 1 00:07:34.558 11:03:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:34.558 11:03:45 -- scripts/common.sh@365 -- # decimal 2 00:07:34.558 11:03:45 -- scripts/common.sh@352 -- # local d=2 00:07:34.558 11:03:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.558 11:03:45 -- scripts/common.sh@354 -- # echo 2 00:07:34.558 11:03:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:34.558 11:03:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:34.558 11:03:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:34.558 11:03:45 -- scripts/common.sh@367 -- # return 0 00:07:34.558 11:03:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.558 11:03:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:34.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.558 --rc genhtml_branch_coverage=1 00:07:34.558 --rc genhtml_function_coverage=1 00:07:34.558 --rc genhtml_legend=1 00:07:34.558 --rc geninfo_all_blocks=1 00:07:34.558 --rc geninfo_unexecuted_blocks=1 00:07:34.558 00:07:34.558 ' 00:07:34.559 11:03:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.559 --rc genhtml_branch_coverage=1 00:07:34.559 --rc genhtml_function_coverage=1 00:07:34.559 --rc genhtml_legend=1 00:07:34.559 --rc geninfo_all_blocks=1 00:07:34.559 --rc geninfo_unexecuted_blocks=1 00:07:34.559 00:07:34.559 ' 00:07:34.559 11:03:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.559 --rc genhtml_branch_coverage=1 00:07:34.559 --rc genhtml_function_coverage=1 00:07:34.559 --rc genhtml_legend=1 00:07:34.559 --rc geninfo_all_blocks=1 00:07:34.559 --rc geninfo_unexecuted_blocks=1 00:07:34.559 00:07:34.559 ' 00:07:34.559 11:03:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.559 --rc genhtml_branch_coverage=1 00:07:34.559 --rc genhtml_function_coverage=1 00:07:34.559 --rc genhtml_legend=1 00:07:34.559 --rc geninfo_all_blocks=1 00:07:34.559 --rc geninfo_unexecuted_blocks=1 00:07:34.559 00:07:34.559 ' 00:07:34.559 11:03:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.559 11:03:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.559 11:03:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.559 11:03:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.559 11:03:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.559 11:03:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.559 11:03:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.559 11:03:45 -- paths/export.sh@5 -- # export PATH 00:07:34.559 11:03:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.559 11:03:45 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:34.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.818 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:34.818 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:34.818 11:03:45 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:34.818 11:03:45 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:34.818 11:03:45 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:34.818 11:03:45 -- scripts/common.sh@312 -- # local nvmes 00:07:34.818 11:03:45 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:34.818 11:03:45 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:34.818 11:03:45 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:34.818 11:03:45 -- scripts/common.sh@297 -- # local bdf= 00:07:34.818 11:03:45 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:34.818 11:03:45 -- scripts/common.sh@232 -- # local class 00:07:34.818 11:03:45 -- scripts/common.sh@233 -- # local subclass 00:07:34.818 11:03:45 -- scripts/common.sh@234 -- # local progif 00:07:34.818 11:03:45 -- scripts/common.sh@235 -- # printf %02x 1 00:07:34.818 11:03:45 -- scripts/common.sh@235 -- # class=01 00:07:34.818 11:03:45 -- scripts/common.sh@236 -- # printf %02x 8 00:07:34.818 11:03:45 -- scripts/common.sh@236 -- # subclass=08 00:07:34.818 11:03:45 -- scripts/common.sh@237 -- # printf %02x 2 00:07:34.818 11:03:45 -- scripts/common.sh@237 -- # progif=02 00:07:34.818 11:03:45 -- scripts/common.sh@239 -- # hash lspci 00:07:34.818 11:03:45 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:34.818 11:03:45 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:34.818 11:03:45 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:34.818 11:03:45 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:34.818 11:03:45 -- scripts/common.sh@244 -- # tr -d '"' 00:07:34.818 11:03:45 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:34.818 11:03:45 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:34.818 11:03:45 -- scripts/common.sh@15 -- # local i 00:07:34.818 11:03:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:34.818 11:03:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:34.818 11:03:45 -- scripts/common.sh@24 -- # return 0 00:07:34.819 11:03:45 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:34.819 11:03:45 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:34.819 11:03:45 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:34.819 11:03:45 -- scripts/common.sh@15 -- # local i 00:07:34.819 11:03:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:34.819 11:03:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:34.819 11:03:45 -- scripts/common.sh@24 -- # return 0 00:07:34.819 11:03:45 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:34.819 11:03:45 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:34.819 11:03:45 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:34.819 11:03:45 -- scripts/common.sh@322 -- # uname -s 00:07:34.819 11:03:45 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:34.819 11:03:45 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:34.819 11:03:45 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:34.819 11:03:45 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:34.819 11:03:45 -- scripts/common.sh@322 -- # uname -s 00:07:34.819 11:03:45 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:34.819 11:03:45 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:34.819 11:03:45 -- scripts/common.sh@327 -- # (( 2 )) 00:07:34.819 11:03:45 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:34.819 11:03:45 -- dd/dd.sh@13 -- # check_liburing 00:07:34.819 11:03:45 -- dd/common.sh@139 -- # local lib so 00:07:34.819 11:03:45 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:34.819 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:34.819 11:03:45 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:34.819 11:03:45 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.819 11:03:45 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:34.819 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:34.819 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:34.819 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.080 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:35.080 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.080 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:35.080 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:35.081 
11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* 
]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:35.081 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.081 11:03:45 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:35.082 11:03:45 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:35.082 11:03:45 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:35.082 * spdk_dd linked to liburing 00:07:35.082 11:03:45 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:35.082 11:03:45 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:35.082 11:03:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.082 11:03:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.082 11:03:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.082 11:03:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.082 11:03:45 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:35.082 11:03:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.082 11:03:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.082 11:03:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:35.082 11:03:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.082 11:03:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.082 11:03:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.082 11:03:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.082 11:03:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.082 11:03:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.082 11:03:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.082 11:03:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.082 11:03:46 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:35.082 11:03:46 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.082 11:03:46 
-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:35.082 11:03:46 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:35.082 11:03:46 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:35.082 11:03:46 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:35.082 11:03:46 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.082 11:03:46 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:35.082 11:03:46 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.082 11:03:46 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:35.082 11:03:46 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.082 11:03:46 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:35.082 11:03:46 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.082 11:03:46 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:35.082 11:03:46 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:35.082 11:03:46 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:35.082 11:03:46 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:35.082 11:03:46 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:35.082 11:03:46 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:35.082 11:03:46 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:35.082 11:03:46 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:35.082 11:03:46 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:35.082 11:03:46 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:35.082 11:03:46 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:35.082 11:03:46 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:35.082 11:03:46 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:35.082 11:03:46 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:35.082 11:03:46 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.082 11:03:46 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:35.082 11:03:46 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:35.082 11:03:46 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:35.082 11:03:46 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.082 11:03:46 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:35.082 11:03:46 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:35.082 11:03:46 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:35.082 11:03:46 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:35.082 11:03:46 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:35.082 11:03:46 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:35.082 11:03:46 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.082 11:03:46 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:35.082 11:03:46 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.082 11:03:46 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:35.082 11:03:46 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:35.082 11:03:46 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:35.082 11:03:46 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:35.082 11:03:46 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:35.082 11:03:46 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:35.082 11:03:46 -- common/build_config.sh@64 -- # 
CONFIG_SHARED=y 00:07:35.082 11:03:46 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:35.082 11:03:46 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.082 11:03:46 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:35.082 11:03:46 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:35.082 11:03:46 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:35.082 11:03:46 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:35.082 11:03:46 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:35.082 11:03:46 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:35.082 11:03:46 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.082 11:03:46 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:35.082 11:03:46 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:35.082 11:03:46 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:35.082 11:03:46 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.082 11:03:46 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:35.082 11:03:46 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:35.082 11:03:46 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:35.082 11:03:46 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:35.082 11:03:46 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:35.082 11:03:46 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:35.082 11:03:46 -- dd/common.sh@157 -- # return 0 00:07:35.082 11:03:46 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:35.082 11:03:46 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:35.082 11:03:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:35.082 11:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.082 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.082 ************************************ 00:07:35.082 START TEST spdk_dd_basic_rw 00:07:35.082 ************************************ 00:07:35.082 11:03:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:35.082 * Looking for test storage... 
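
The probe that just completed decides whether the uring-specific dd paths may run: it walks the shared objects the spdk_dd binary is linked against, flips liburing_in_use to 1 as soon as one of them matches liburing.so.*, and cross-checks CONFIG_URING=y from the sourced build_config.sh. A self-contained sketch of that detection, assuming the library list comes from something like ldd (the actual invocation is outside this excerpt) and using a placeholder binary path:

#!/usr/bin/env bash
# Sketch only: detect whether a binary is dynamically linked against liburing.
# The path below is a placeholder; the real check inspects the spdk_dd binary.
BIN=./build/bin/spdk_dd
liburing_in_use=0
while read -r lib _ so _; do
    # ldd lines look like: "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
    [[ $lib == liburing.so.* ]] && liburing_in_use=1 && echo "found: $so"
done < <(ldd "$BIN")
echo "liburing_in_use=$liburing_in_use"
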
00:07:35.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.082 11:03:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.082 11:03:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.082 11:03:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.082 11:03:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.082 11:03:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:35.082 11:03:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:35.082 11:03:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:35.082 11:03:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:35.082 11:03:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:35.082 11:03:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.082 11:03:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:35.082 11:03:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:35.082 11:03:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:35.082 11:03:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:35.082 11:03:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:35.082 11:03:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:35.082 11:03:46 -- scripts/common.sh@344 -- # : 1 00:07:35.082 11:03:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:35.082 11:03:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.082 11:03:46 -- scripts/common.sh@364 -- # decimal 1 00:07:35.082 11:03:46 -- scripts/common.sh@352 -- # local d=1 00:07:35.082 11:03:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.082 11:03:46 -- scripts/common.sh@354 -- # echo 1 00:07:35.082 11:03:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:35.082 11:03:46 -- scripts/common.sh@365 -- # decimal 2 00:07:35.082 11:03:46 -- scripts/common.sh@352 -- # local d=2 00:07:35.082 11:03:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.082 11:03:46 -- scripts/common.sh@354 -- # echo 2 00:07:35.082 11:03:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:35.082 11:03:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:35.082 11:03:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:35.082 11:03:46 -- scripts/common.sh@367 -- # return 0 00:07:35.082 11:03:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.082 11:03:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.082 --rc genhtml_branch_coverage=1 00:07:35.082 --rc genhtml_function_coverage=1 00:07:35.082 --rc genhtml_legend=1 00:07:35.082 --rc geninfo_all_blocks=1 00:07:35.083 --rc geninfo_unexecuted_blocks=1 00:07:35.083 00:07:35.083 ' 00:07:35.083 11:03:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.083 --rc genhtml_branch_coverage=1 00:07:35.083 --rc genhtml_function_coverage=1 00:07:35.083 --rc genhtml_legend=1 00:07:35.083 --rc geninfo_all_blocks=1 00:07:35.083 --rc geninfo_unexecuted_blocks=1 00:07:35.083 00:07:35.083 ' 00:07:35.083 11:03:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.083 --rc genhtml_branch_coverage=1 00:07:35.083 --rc genhtml_function_coverage=1 00:07:35.083 --rc genhtml_legend=1 00:07:35.083 --rc geninfo_all_blocks=1 00:07:35.083 --rc geninfo_unexecuted_blocks=1 00:07:35.083 00:07:35.083 ' 00:07:35.083 11:03:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.083 --rc genhtml_branch_coverage=1 00:07:35.083 --rc genhtml_function_coverage=1 00:07:35.083 --rc genhtml_legend=1 00:07:35.083 --rc geninfo_all_blocks=1 00:07:35.083 --rc geninfo_unexecuted_blocks=1 00:07:35.083 00:07:35.083 ' 00:07:35.083 11:03:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.083 11:03:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.083 11:03:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.083 11:03:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.083 11:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.083 11:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.083 11:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.083 11:03:46 -- paths/export.sh@5 -- # export PATH 00:07:35.083 11:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.083 11:03:46 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:35.083 11:03:46 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:35.083 11:03:46 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:35.083 11:03:46 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:35.083 11:03:46 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:35.083 11:03:46 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:07:35.083 11:03:46 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:35.083 11:03:46 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.083 11:03:46 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.083 11:03:46 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:35.083 11:03:46 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:35.083 11:03:46 -- dd/common.sh@126 -- # mapfile -t id 00:07:35.083 11:03:46 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:35.345 11:03:46 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 9 Host Read Commands: 2235 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:35.345 11:03:46 -- dd/common.sh@130 -- # lbaf=04 00:07:35.346 11:03:46 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 9 Host Read Commands: 2235 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:35.346 11:03:46 -- dd/common.sh@132 -- # lbaf=4096 00:07:35.346 11:03:46 -- dd/common.sh@134 -- # echo 4096 00:07:35.346 11:03:46 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:35.346 11:03:46 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:35.346 11:03:46 -- dd/basic_rw.sh@96 -- # : 00:07:35.346 11:03:46 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:35.346 11:03:46 -- common/autotest_common.sh@1087 -- # '[' 8 
-le 1 ']' 00:07:35.346 11:03:46 -- dd/common.sh@31 -- # xtrace_disable 00:07:35.346 11:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.346 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.346 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.346 ************************************ 00:07:35.346 START TEST dd_bs_lt_native_bs 00:07:35.346 ************************************ 00:07:35.346 11:03:46 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:35.346 11:03:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:35.346 11:03:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:35.346 11:03:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.346 11:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.346 11:03:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.346 11:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.346 11:03:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.346 11:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.346 11:03:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.346 11:03:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.346 11:03:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:35.346 { 00:07:35.346 "subsystems": [ 00:07:35.346 { 00:07:35.346 "subsystem": "bdev", 00:07:35.346 "config": [ 00:07:35.346 { 00:07:35.346 "params": { 00:07:35.346 "trtype": "pcie", 00:07:35.346 "traddr": "0000:00:06.0", 00:07:35.346 "name": "Nvme0" 00:07:35.346 }, 00:07:35.346 "method": "bdev_nvme_attach_controller" 00:07:35.346 }, 00:07:35.346 { 00:07:35.346 "method": "bdev_wait_for_examine" 00:07:35.346 } 00:07:35.346 ] 00:07:35.346 } 00:07:35.346 ] 00:07:35.346 } 00:07:35.346 [2024-12-06 11:03:46.467332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
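
The 4096-byte native block size this negative test compares against was derived from the spdk_nvme_identify dump shown above: one regex picks out the index of the current LBA format (#04) and a second picks out that format's data size. A reduced sketch of the same two-step extraction, with the identify text shortened to the two lines that matter:

#!/usr/bin/env bash
# Sketch: extract the native block size from spdk_nvme_identify-style output.
# "$id" stands in for the captured identify dump; only two relevant lines are kept.
id='Current LBA Format:                    LBA Format #04
LBA Format #04: Data Size:  4096  Metadata Size:     0'
fmt_re='Current LBA Format: *LBA Format #([0-9]+)'
if [[ $id =~ $fmt_re ]]; then
    lbaf=${BASH_REMATCH[1]}                                  # "04" here
    size_re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $size_re ]] && native_bs=${BASH_REMATCH[1]}    # 4096
fi
echo "native block size: ${native_bs:-unknown} bytes"
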
00:07:35.346 [2024-12-06 11:03:46.467421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69710 ] 00:07:35.606 [2024-12-06 11:03:46.607835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.606 [2024-12-06 11:03:46.648068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.865 [2024-12-06 11:03:46.764779] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:35.865 [2024-12-06 11:03:46.764851] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.865 [2024-12-06 11:03:46.836824] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:35.865 ************************************ 00:07:35.865 END TEST dd_bs_lt_native_bs 00:07:35.865 ************************************ 00:07:35.865 11:03:46 -- common/autotest_common.sh@653 -- # es=234 00:07:35.865 11:03:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.865 11:03:46 -- common/autotest_common.sh@662 -- # es=106 00:07:35.865 11:03:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:35.865 11:03:46 -- common/autotest_common.sh@670 -- # es=1 00:07:35.865 11:03:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.865 00:07:35.865 real 0m0.493s 00:07:35.865 user 0m0.326s 00:07:35.865 sys 0m0.124s 00:07:35.865 11:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.865 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.865 11:03:46 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:35.865 11:03:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:35.865 11:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.865 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.865 ************************************ 00:07:35.865 START TEST dd_rw 00:07:35.865 ************************************ 00:07:35.865 11:03:46 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:35.865 11:03:46 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:35.865 11:03:46 -- dd/basic_rw.sh@12 -- # local count size 00:07:35.865 11:03:46 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:35.865 11:03:46 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:35.865 11:03:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:35.865 11:03:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:35.865 11:03:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:35.865 11:03:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:35.865 11:03:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:35.865 11:03:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:35.865 11:03:46 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:35.865 11:03:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:35.865 11:03:46 -- dd/basic_rw.sh@23 -- # count=15 00:07:35.865 11:03:46 -- dd/basic_rw.sh@24 -- # count=15 00:07:35.865 11:03:46 -- dd/basic_rw.sh@25 -- # size=61440 00:07:35.865 11:03:46 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:35.865 11:03:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:35.865 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:36.431 11:03:47 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
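
dd_bs_lt_native_bs passes precisely because spdk_dd refuses a 2048-byte --bs against the 4096-byte native block size; the NOT wrapper inverts the exit status, and the es=234 -> 106 -> 1 sequence above simply normalizes exit codes above 128 (234 - 128 = 106) before mapping them to the expected 1. A stripped-down sketch of the same expect-failure pattern, with a hypothetical fake_dd standing in for the rejected spdk_dd call:

#!/usr/bin/env bash
# Sketch: an expect-failure wrapper in the spirit of the NOT helper traced above.
# fake_dd is a hypothetical stand-in for the spdk_dd run rejected for --bs=2048.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded -> negative test fails
    fi
    return 0        # wrapped command failed as expected -> negative test passes
}
fake_dd() {
    echo '--bs value cannot be less than the native block size' >&2
    return 1
}
NOT fake_dd && echo 'dd_bs_lt_native_bs-style check passed'
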
00:07:36.431 11:03:47 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:36.431 11:03:47 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.431 11:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.431 [2024-12-06 11:03:47.550676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.431 [2024-12-06 11:03:47.550945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69741 ] 00:07:36.431 { 00:07:36.431 "subsystems": [ 00:07:36.431 { 00:07:36.431 "subsystem": "bdev", 00:07:36.431 "config": [ 00:07:36.431 { 00:07:36.431 "params": { 00:07:36.431 "trtype": "pcie", 00:07:36.431 "traddr": "0000:00:06.0", 00:07:36.431 "name": "Nvme0" 00:07:36.431 }, 00:07:36.431 "method": "bdev_nvme_attach_controller" 00:07:36.431 }, 00:07:36.431 { 00:07:36.432 "method": "bdev_wait_for_examine" 00:07:36.432 } 00:07:36.432 ] 00:07:36.432 } 00:07:36.432 ] 00:07:36.432 } 00:07:36.691 [2024-12-06 11:03:47.688170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.691 [2024-12-06 11:03:47.718646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.691  [2024-12-06T11:03:48.097Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:36.950 00:07:36.950 11:03:47 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:36.950 11:03:47 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.950 11:03:47 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.950 11:03:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.950 [2024-12-06 11:03:48.032760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
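
Every spdk_dd invocation in this run reaches the QEMU NVMe controller through a small JSON bdev configuration delivered over a file descriptor (--json /dev/fd/62 in the trace), which is the footprint bash process substitution leaves, and the subsystem block echoed into the log appears to be what gen_conf emits. A sketch of that pattern, with the JSON copied from the trace and the spdk_dd invocation shown only as an illustrative comment:

#!/usr/bin/env bash
# Sketch: hand spdk_dd its bdev configuration over a file descriptor.
# Process substitution shows up in a trace as a /dev/fd/N path.
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
# Illustrative invocation only (paths and flags taken from the trace above):
# spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
gen_conf
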
00:07:36.950 [2024-12-06 11:03:48.033014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:07:36.950 { 00:07:36.950 "subsystems": [ 00:07:36.950 { 00:07:36.950 "subsystem": "bdev", 00:07:36.950 "config": [ 00:07:36.950 { 00:07:36.950 "params": { 00:07:36.950 "trtype": "pcie", 00:07:36.950 "traddr": "0000:00:06.0", 00:07:36.950 "name": "Nvme0" 00:07:36.950 }, 00:07:36.950 "method": "bdev_nvme_attach_controller" 00:07:36.950 }, 00:07:36.950 { 00:07:36.950 "method": "bdev_wait_for_examine" 00:07:36.950 } 00:07:36.950 ] 00:07:36.950 } 00:07:36.950 ] 00:07:36.950 } 00:07:37.209 [2024-12-06 11:03:48.171918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.209 [2024-12-06 11:03:48.203078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.209  [2024-12-06T11:03:48.614Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:37.467 00:07:37.467 11:03:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.467 11:03:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:37.467 11:03:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:37.467 11:03:48 -- dd/common.sh@11 -- # local nvme_ref= 00:07:37.467 11:03:48 -- dd/common.sh@12 -- # local size=61440 00:07:37.468 11:03:48 -- dd/common.sh@14 -- # local bs=1048576 00:07:37.468 11:03:48 -- dd/common.sh@15 -- # local count=1 00:07:37.468 11:03:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:37.468 11:03:48 -- dd/common.sh@18 -- # gen_conf 00:07:37.468 11:03:48 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.468 11:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:37.468 [2024-12-06 11:03:48.525350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
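
The round that just finished is the template every (block size, queue depth) pair follows: generate a payload, write it to the Nvme0n1 bdev with spdk_dd, read the same number of blocks back into a second dump file, diff the two, then overwrite the namespace with zeros before the next round. Reduced to ordinary coreutils for illustration (plain dd and a temp file stand in for spdk_dd and the bdev; this is a sketch, not the test's own code):

#!/usr/bin/env bash
# Sketch of one basic_rw round using a plain file and coreutils dd in place of
# the Nvme0n1 bdev and spdk_dd; bs/count match the first round in the log.
bs=4096 count=15                                   # 15 * 4096 = 61440 bytes
dev=$(mktemp)                                      # stand-in for the NVMe bdev
head -c $((bs * count)) /dev/urandom > dd.dump0    # payload generation step
dd if=dd.dump0 of="$dev" bs=$bs count=$count status=none    # write phase
dd if="$dev" of=dd.dump1 bs=$bs count=$count status=none    # read-back phase
diff -q dd.dump0 dd.dump1 && echo 'round ok'                # verify
dd if=/dev/zero of="$dev" bs=1048576 count=1 status=none    # clear step
rm -f "$dev" dd.dump0 dd.dump1
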
00:07:37.468 [2024-12-06 11:03:48.526285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69767 ] 00:07:37.468 { 00:07:37.468 "subsystems": [ 00:07:37.468 { 00:07:37.468 "subsystem": "bdev", 00:07:37.468 "config": [ 00:07:37.468 { 00:07:37.468 "params": { 00:07:37.468 "trtype": "pcie", 00:07:37.468 "traddr": "0000:00:06.0", 00:07:37.468 "name": "Nvme0" 00:07:37.468 }, 00:07:37.468 "method": "bdev_nvme_attach_controller" 00:07:37.468 }, 00:07:37.468 { 00:07:37.468 "method": "bdev_wait_for_examine" 00:07:37.468 } 00:07:37.468 ] 00:07:37.468 } 00:07:37.468 ] 00:07:37.468 } 00:07:37.727 [2024-12-06 11:03:48.664091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.727 [2024-12-06 11:03:48.694459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.727  [2024-12-06T11:03:49.133Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:37.986 00:07:37.986 11:03:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.986 11:03:48 -- dd/basic_rw.sh@23 -- # count=15 00:07:37.986 11:03:48 -- dd/basic_rw.sh@24 -- # count=15 00:07:37.986 11:03:48 -- dd/basic_rw.sh@25 -- # size=61440 00:07:37.986 11:03:48 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:37.986 11:03:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:37.986 11:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:38.554 11:03:49 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:38.554 11:03:49 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.554 11:03:49 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.554 11:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.554 [2024-12-06 11:03:49.514433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
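
The rounds that follow come from a small matrix: the 4096-byte native block size shifted left by 0, 1 and 2 gives block sizes of 4096, 8192 and 16384, each driven at queue depths 1 and 64, matching the qds=(1 64) and bss+=($((native_bs << bs))) lines earlier in the trace. The matrix can be reproduced on its own:

#!/usr/bin/env bash
# Sketch: rebuild the block-size / queue-depth matrix driving the dd_rw rounds.
native_bs=4096          # from the identify parsing earlier in the log
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=( $((native_bs << s)) )    # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        echo "round: bs=$bs qd=$qd"
    done
done
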
00:07:38.554 [2024-12-06 11:03:49.514524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69785 ] 00:07:38.554 { 00:07:38.554 "subsystems": [ 00:07:38.554 { 00:07:38.554 "subsystem": "bdev", 00:07:38.554 "config": [ 00:07:38.554 { 00:07:38.554 "params": { 00:07:38.554 "trtype": "pcie", 00:07:38.554 "traddr": "0000:00:06.0", 00:07:38.554 "name": "Nvme0" 00:07:38.554 }, 00:07:38.554 "method": "bdev_nvme_attach_controller" 00:07:38.554 }, 00:07:38.554 { 00:07:38.554 "method": "bdev_wait_for_examine" 00:07:38.554 } 00:07:38.554 ] 00:07:38.554 } 00:07:38.554 ] 00:07:38.554 } 00:07:38.554 [2024-12-06 11:03:49.652870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.554 [2024-12-06 11:03:49.685474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.813  [2024-12-06T11:03:49.960Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:38.813 00:07:39.072 11:03:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:39.072 11:03:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:39.072 11:03:49 -- dd/common.sh@31 -- # xtrace_disable 00:07:39.072 11:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:39.072 [2024-12-06 11:03:50.005930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.072 [2024-12-06 11:03:50.006208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69792 ] 00:07:39.072 { 00:07:39.072 "subsystems": [ 00:07:39.072 { 00:07:39.072 "subsystem": "bdev", 00:07:39.072 "config": [ 00:07:39.072 { 00:07:39.072 "params": { 00:07:39.072 "trtype": "pcie", 00:07:39.072 "traddr": "0000:00:06.0", 00:07:39.072 "name": "Nvme0" 00:07:39.072 }, 00:07:39.072 "method": "bdev_nvme_attach_controller" 00:07:39.072 }, 00:07:39.072 { 00:07:39.072 "method": "bdev_wait_for_examine" 00:07:39.072 } 00:07:39.072 ] 00:07:39.072 } 00:07:39.072 ] 00:07:39.072 } 00:07:39.072 [2024-12-06 11:03:50.144560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.072 [2024-12-06 11:03:50.175281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.331  [2024-12-06T11:03:50.478Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:39.331 00:07:39.331 11:03:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.331 11:03:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:39.331 11:03:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.331 11:03:50 -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.331 11:03:50 -- dd/common.sh@12 -- # local size=61440 00:07:39.331 11:03:50 -- dd/common.sh@14 -- # local bs=1048576 00:07:39.331 11:03:50 -- dd/common.sh@15 -- # local count=1 00:07:39.331 11:03:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.331 11:03:50 -- dd/common.sh@18 -- # gen_conf 00:07:39.331 11:03:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:39.331 11:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.590 [2024-12-06 
11:03:50.490843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.590 [2024-12-06 11:03:50.490937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69810 ] 00:07:39.590 { 00:07:39.590 "subsystems": [ 00:07:39.590 { 00:07:39.590 "subsystem": "bdev", 00:07:39.590 "config": [ 00:07:39.590 { 00:07:39.590 "params": { 00:07:39.590 "trtype": "pcie", 00:07:39.590 "traddr": "0000:00:06.0", 00:07:39.590 "name": "Nvme0" 00:07:39.590 }, 00:07:39.590 "method": "bdev_nvme_attach_controller" 00:07:39.590 }, 00:07:39.590 { 00:07:39.590 "method": "bdev_wait_for_examine" 00:07:39.590 } 00:07:39.590 ] 00:07:39.590 } 00:07:39.590 ] 00:07:39.590 } 00:07:39.590 [2024-12-06 11:03:50.627861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.590 [2024-12-06 11:03:50.658421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.848  [2024-12-06T11:03:50.995Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.848 00:07:39.849 11:03:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:39.849 11:03:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.849 11:03:50 -- dd/basic_rw.sh@23 -- # count=7 00:07:39.849 11:03:50 -- dd/basic_rw.sh@24 -- # count=7 00:07:39.849 11:03:50 -- dd/basic_rw.sh@25 -- # size=57344 00:07:39.849 11:03:50 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:39.849 11:03:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:39.849 11:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:40.417 11:03:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:40.417 11:03:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.417 11:03:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.417 11:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.417 [2024-12-06 11:03:51.461076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
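
Between rounds the namespace is wiped by streaming /dev/zero through spdk_dd with a 1 MiB block size; the trace shows count=1 for the sizes used here. A sketch of that step, assuming (this is a guess, not taken from the trace) that the count is the round's size rounded up to whole 1 MiB blocks:

#!/usr/bin/env bash
# Sketch: the zeroing step between rounds. Rounding the round size up to whole
# 1 MiB blocks (61440 bytes -> 1 block) matches the count=1 visible in the trace,
# but the exact formula clear_nvme uses is an assumption here.
size=61440
bs=1048576
count=$(( (size + bs - 1) / bs ))
echo "would run: spdk_dd --if=/dev/zero --bs=$bs --ob=Nvme0n1 --count=$count"
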
00:07:40.418 [2024-12-06 11:03:51.461350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69823 ] 00:07:40.418 { 00:07:40.418 "subsystems": [ 00:07:40.418 { 00:07:40.418 "subsystem": "bdev", 00:07:40.418 "config": [ 00:07:40.418 { 00:07:40.418 "params": { 00:07:40.418 "trtype": "pcie", 00:07:40.418 "traddr": "0000:00:06.0", 00:07:40.418 "name": "Nvme0" 00:07:40.418 }, 00:07:40.418 "method": "bdev_nvme_attach_controller" 00:07:40.418 }, 00:07:40.418 { 00:07:40.418 "method": "bdev_wait_for_examine" 00:07:40.418 } 00:07:40.418 ] 00:07:40.418 } 00:07:40.418 ] 00:07:40.418 } 00:07:40.676 [2024-12-06 11:03:51.599823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.676 [2024-12-06 11:03:51.634085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.676  [2024-12-06T11:03:52.082Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:40.935 00:07:40.935 11:03:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:40.935 11:03:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.935 11:03:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.935 11:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.935 [2024-12-06 11:03:51.937002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.935 [2024-12-06 11:03:51.937092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69836 ] 00:07:40.935 { 00:07:40.935 "subsystems": [ 00:07:40.935 { 00:07:40.935 "subsystem": "bdev", 00:07:40.935 "config": [ 00:07:40.935 { 00:07:40.935 "params": { 00:07:40.935 "trtype": "pcie", 00:07:40.935 "traddr": "0000:00:06.0", 00:07:40.935 "name": "Nvme0" 00:07:40.935 }, 00:07:40.935 "method": "bdev_nvme_attach_controller" 00:07:40.935 }, 00:07:40.935 { 00:07:40.935 "method": "bdev_wait_for_examine" 00:07:40.935 } 00:07:40.935 ] 00:07:40.935 } 00:07:40.935 ] 00:07:40.935 } 00:07:40.935 [2024-12-06 11:03:52.075214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.195 [2024-12-06 11:03:52.107174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.195  [2024-12-06T11:03:52.601Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:41.454 00:07:41.454 11:03:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.454 11:03:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:41.454 11:03:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.454 11:03:52 -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.454 11:03:52 -- dd/common.sh@12 -- # local size=57344 00:07:41.454 11:03:52 -- dd/common.sh@14 -- # local bs=1048576 00:07:41.454 11:03:52 -- dd/common.sh@15 -- # local count=1 00:07:41.454 11:03:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.454 11:03:52 -- dd/common.sh@18 -- # gen_conf 00:07:41.454 11:03:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:41.454 11:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.454 [2024-12-06 
11:03:52.428429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.454 [2024-12-06 11:03:52.428513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69849 ] 00:07:41.454 { 00:07:41.454 "subsystems": [ 00:07:41.454 { 00:07:41.454 "subsystem": "bdev", 00:07:41.454 "config": [ 00:07:41.454 { 00:07:41.454 "params": { 00:07:41.454 "trtype": "pcie", 00:07:41.454 "traddr": "0000:00:06.0", 00:07:41.454 "name": "Nvme0" 00:07:41.454 }, 00:07:41.454 "method": "bdev_nvme_attach_controller" 00:07:41.454 }, 00:07:41.454 { 00:07:41.454 "method": "bdev_wait_for_examine" 00:07:41.454 } 00:07:41.454 ] 00:07:41.454 } 00:07:41.454 ] 00:07:41.454 } 00:07:41.454 [2024-12-06 11:03:52.567875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.454 [2024-12-06 11:03:52.598884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.713  [2024-12-06T11:03:53.119Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:41.972 00:07:41.972 11:03:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:41.972 11:03:52 -- dd/basic_rw.sh@23 -- # count=7 00:07:41.972 11:03:52 -- dd/basic_rw.sh@24 -- # count=7 00:07:41.972 11:03:52 -- dd/basic_rw.sh@25 -- # size=57344 00:07:41.972 11:03:52 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:41.972 11:03:52 -- dd/common.sh@98 -- # xtrace_disable 00:07:41.972 11:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:42.539 11:03:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:42.539 11:03:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.539 11:03:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.539 11:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.539 [2024-12-06 11:03:53.428045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.539 [2024-12-06 11:03:53.428140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69866 ] 00:07:42.539 { 00:07:42.539 "subsystems": [ 00:07:42.539 { 00:07:42.539 "subsystem": "bdev", 00:07:42.539 "config": [ 00:07:42.539 { 00:07:42.539 "params": { 00:07:42.539 "trtype": "pcie", 00:07:42.539 "traddr": "0000:00:06.0", 00:07:42.539 "name": "Nvme0" 00:07:42.539 }, 00:07:42.539 "method": "bdev_nvme_attach_controller" 00:07:42.539 }, 00:07:42.539 { 00:07:42.539 "method": "bdev_wait_for_examine" 00:07:42.539 } 00:07:42.539 ] 00:07:42.539 } 00:07:42.539 ] 00:07:42.539 } 00:07:42.539 [2024-12-06 11:03:53.562976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.539 [2024-12-06 11:03:53.594392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.797  [2024-12-06T11:03:53.944Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:42.797 00:07:42.797 11:03:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:42.797 11:03:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:42.797 11:03:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.797 11:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.797 [2024-12-06 11:03:53.902204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.797 [2024-12-06 11:03:53.902298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69880 ] 00:07:42.797 { 00:07:42.797 "subsystems": [ 00:07:42.797 { 00:07:42.797 "subsystem": "bdev", 00:07:42.797 "config": [ 00:07:42.797 { 00:07:42.797 "params": { 00:07:42.797 "trtype": "pcie", 00:07:42.797 "traddr": "0000:00:06.0", 00:07:42.797 "name": "Nvme0" 00:07:42.797 }, 00:07:42.797 "method": "bdev_nvme_attach_controller" 00:07:42.797 }, 00:07:42.797 { 00:07:42.797 "method": "bdev_wait_for_examine" 00:07:42.797 } 00:07:42.797 ] 00:07:42.797 } 00:07:42.797 ] 00:07:42.797 } 00:07:43.056 [2024-12-06 11:03:54.040248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.056 [2024-12-06 11:03:54.070502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.056  [2024-12-06T11:03:54.461Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.314 00:07:43.314 11:03:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.314 11:03:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:43.314 11:03:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.314 11:03:54 -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.314 11:03:54 -- dd/common.sh@12 -- # local size=57344 00:07:43.314 11:03:54 -- dd/common.sh@14 -- # local bs=1048576 00:07:43.314 11:03:54 -- dd/common.sh@15 -- # local count=1 00:07:43.314 11:03:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.314 11:03:54 -- dd/common.sh@18 -- # gen_conf 00:07:43.314 11:03:54 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.314 11:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:43.314 [2024-12-06 
11:03:54.386707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.314 [2024-12-06 11:03:54.386823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ] 00:07:43.314 { 00:07:43.314 "subsystems": [ 00:07:43.314 { 00:07:43.314 "subsystem": "bdev", 00:07:43.314 "config": [ 00:07:43.314 { 00:07:43.314 "params": { 00:07:43.314 "trtype": "pcie", 00:07:43.314 "traddr": "0000:00:06.0", 00:07:43.314 "name": "Nvme0" 00:07:43.314 }, 00:07:43.314 "method": "bdev_nvme_attach_controller" 00:07:43.314 }, 00:07:43.314 { 00:07:43.314 "method": "bdev_wait_for_examine" 00:07:43.314 } 00:07:43.314 ] 00:07:43.314 } 00:07:43.314 ] 00:07:43.314 } 00:07:43.573 [2024-12-06 11:03:54.524585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.573 [2024-12-06 11:03:54.558668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.573  [2024-12-06T11:03:54.979Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.832 00:07:43.832 11:03:54 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:43.832 11:03:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:43.832 11:03:54 -- dd/basic_rw.sh@23 -- # count=3 00:07:43.832 11:03:54 -- dd/basic_rw.sh@24 -- # count=3 00:07:43.832 11:03:54 -- dd/basic_rw.sh@25 -- # size=49152 00:07:43.832 11:03:54 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:43.832 11:03:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:43.832 11:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:44.091 11:03:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:44.091 11:03:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:44.091 11:03:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.091 11:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.350 [2024-12-06 11:03:55.270531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
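Each basic_rw pass that follows repeats the same four-step cycle the earlier bs=8192 runs used: write the generated pattern to the bdev, read it back, byte-compare the two dumps, then zero the first megabyte before the next iteration. A condensed sketch for the bs=16384, qd=1 case starting here, with conf.json used purely as a placeholder for the bdev JSON the script supplies on /dev/fd/62:

# write the generated 48 KiB pattern to the Nvme0n1 bdev
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json conf.json
# read the same three 16 KiB blocks back into a second dump
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json conf.json
# verify the round trip, then wipe the bdev for the next bs/qd combination
diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json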
00:07:44.350 [2024-12-06 11:03:55.270692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69906 ] 00:07:44.350 { 00:07:44.350 "subsystems": [ 00:07:44.350 { 00:07:44.350 "subsystem": "bdev", 00:07:44.350 "config": [ 00:07:44.350 { 00:07:44.350 "params": { 00:07:44.350 "trtype": "pcie", 00:07:44.350 "traddr": "0000:00:06.0", 00:07:44.350 "name": "Nvme0" 00:07:44.350 }, 00:07:44.350 "method": "bdev_nvme_attach_controller" 00:07:44.350 }, 00:07:44.350 { 00:07:44.350 "method": "bdev_wait_for_examine" 00:07:44.350 } 00:07:44.350 ] 00:07:44.350 } 00:07:44.350 ] 00:07:44.350 } 00:07:44.350 [2024-12-06 11:03:55.415521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.350 [2024-12-06 11:03:55.446597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.608  [2024-12-06T11:03:55.755Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:44.608 00:07:44.608 11:03:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:44.608 11:03:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:44.608 11:03:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.608 11:03:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.867 [2024-12-06 11:03:55.759852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.867 [2024-12-06 11:03:55.759954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69924 ] 00:07:44.867 { 00:07:44.867 "subsystems": [ 00:07:44.867 { 00:07:44.867 "subsystem": "bdev", 00:07:44.867 "config": [ 00:07:44.867 { 00:07:44.867 "params": { 00:07:44.867 "trtype": "pcie", 00:07:44.867 "traddr": "0000:00:06.0", 00:07:44.867 "name": "Nvme0" 00:07:44.867 }, 00:07:44.867 "method": "bdev_nvme_attach_controller" 00:07:44.867 }, 00:07:44.867 { 00:07:44.867 "method": "bdev_wait_for_examine" 00:07:44.867 } 00:07:44.867 ] 00:07:44.867 } 00:07:44.867 ] 00:07:44.867 } 00:07:44.867 [2024-12-06 11:03:55.897359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.867 [2024-12-06 11:03:55.927791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.126  [2024-12-06T11:03:56.273Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:45.126 00:07:45.126 11:03:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.126 11:03:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:45.126 11:03:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.126 11:03:56 -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.126 11:03:56 -- dd/common.sh@12 -- # local size=49152 00:07:45.126 11:03:56 -- dd/common.sh@14 -- # local bs=1048576 00:07:45.126 11:03:56 -- dd/common.sh@15 -- # local count=1 00:07:45.126 11:03:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.126 11:03:56 -- dd/common.sh@18 -- # gen_conf 00:07:45.126 11:03:56 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.126 11:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.126 [2024-12-06 
11:03:56.239356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.126 [2024-12-06 11:03:56.239458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69932 ] 00:07:45.126 { 00:07:45.126 "subsystems": [ 00:07:45.126 { 00:07:45.126 "subsystem": "bdev", 00:07:45.126 "config": [ 00:07:45.126 { 00:07:45.126 "params": { 00:07:45.126 "trtype": "pcie", 00:07:45.126 "traddr": "0000:00:06.0", 00:07:45.126 "name": "Nvme0" 00:07:45.126 }, 00:07:45.126 "method": "bdev_nvme_attach_controller" 00:07:45.126 }, 00:07:45.126 { 00:07:45.126 "method": "bdev_wait_for_examine" 00:07:45.126 } 00:07:45.126 ] 00:07:45.126 } 00:07:45.126 ] 00:07:45.126 } 00:07:45.386 [2024-12-06 11:03:56.376987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.386 [2024-12-06 11:03:56.407869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.386  [2024-12-06T11:03:56.791Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:45.644 00:07:45.644 11:03:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:45.644 11:03:56 -- dd/basic_rw.sh@23 -- # count=3 00:07:45.644 11:03:56 -- dd/basic_rw.sh@24 -- # count=3 00:07:45.644 11:03:56 -- dd/basic_rw.sh@25 -- # size=49152 00:07:45.644 11:03:56 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:45.644 11:03:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:45.644 11:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 11:03:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:46.211 11:03:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:46.211 11:03:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.211 11:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 [2024-12-06 11:03:57.125263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:46.211 [2024-12-06 11:03:57.125370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69950 ] 00:07:46.211 { 00:07:46.211 "subsystems": [ 00:07:46.211 { 00:07:46.211 "subsystem": "bdev", 00:07:46.211 "config": [ 00:07:46.211 { 00:07:46.211 "params": { 00:07:46.211 "trtype": "pcie", 00:07:46.211 "traddr": "0000:00:06.0", 00:07:46.211 "name": "Nvme0" 00:07:46.211 }, 00:07:46.211 "method": "bdev_nvme_attach_controller" 00:07:46.211 }, 00:07:46.211 { 00:07:46.211 "method": "bdev_wait_for_examine" 00:07:46.211 } 00:07:46.211 ] 00:07:46.211 } 00:07:46.211 ] 00:07:46.211 } 00:07:46.211 [2024-12-06 11:03:57.263877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.211 [2024-12-06 11:03:57.296231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.469  [2024-12-06T11:03:57.616Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.469 00:07:46.469 11:03:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:46.469 11:03:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.469 11:03:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.469 11:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:46.469 [2024-12-06 11:03:57.609605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.469 [2024-12-06 11:03:57.609713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69959 ] 00:07:46.727 { 00:07:46.727 "subsystems": [ 00:07:46.727 { 00:07:46.727 "subsystem": "bdev", 00:07:46.727 "config": [ 00:07:46.727 { 00:07:46.727 "params": { 00:07:46.727 "trtype": "pcie", 00:07:46.727 "traddr": "0000:00:06.0", 00:07:46.727 "name": "Nvme0" 00:07:46.727 }, 00:07:46.727 "method": "bdev_nvme_attach_controller" 00:07:46.727 }, 00:07:46.727 { 00:07:46.727 "method": "bdev_wait_for_examine" 00:07:46.727 } 00:07:46.727 ] 00:07:46.727 } 00:07:46.727 ] 00:07:46.727 } 00:07:46.727 [2024-12-06 11:03:57.744695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.727 [2024-12-06 11:03:57.775407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.985  [2024-12-06T11:03:58.132Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.985 00:07:46.985 11:03:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.985 11:03:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:46.985 11:03:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.985 11:03:58 -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.985 11:03:58 -- dd/common.sh@12 -- # local size=49152 00:07:46.985 11:03:58 -- dd/common.sh@14 -- # local bs=1048576 00:07:46.985 11:03:58 -- dd/common.sh@15 -- # local count=1 00:07:46.986 11:03:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.986 11:03:58 -- dd/common.sh@18 -- # gen_conf 00:07:46.986 11:03:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.986 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:46.986 [2024-12-06 
11:03:58.092069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.986 [2024-12-06 11:03:58.092168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69976 ] 00:07:46.986 { 00:07:46.986 "subsystems": [ 00:07:46.986 { 00:07:46.986 "subsystem": "bdev", 00:07:46.986 "config": [ 00:07:46.986 { 00:07:46.986 "params": { 00:07:46.986 "trtype": "pcie", 00:07:46.986 "traddr": "0000:00:06.0", 00:07:46.986 "name": "Nvme0" 00:07:46.986 }, 00:07:46.986 "method": "bdev_nvme_attach_controller" 00:07:46.986 }, 00:07:46.986 { 00:07:46.986 "method": "bdev_wait_for_examine" 00:07:46.986 } 00:07:46.986 ] 00:07:46.986 } 00:07:46.986 ] 00:07:46.986 } 00:07:47.283 [2024-12-06 11:03:58.231185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.283 [2024-12-06 11:03:58.261352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.283  [2024-12-06T11:03:58.689Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.542 00:07:47.542 00:07:47.542 real 0m11.564s 00:07:47.542 user 0m8.440s 00:07:47.542 sys 0m2.053s 00:07:47.542 11:03:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.542 ************************************ 00:07:47.542 END TEST dd_rw 00:07:47.542 ************************************ 00:07:47.542 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:47.542 11:03:58 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:47.542 11:03:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.542 11:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.542 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:47.542 ************************************ 00:07:47.542 START TEST dd_rw_offset 00:07:47.542 ************************************ 00:07:47.542 11:03:58 -- common/autotest_common.sh@1114 -- # basic_offset 00:07:47.542 11:03:58 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:47.542 11:03:58 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:47.542 11:03:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:47.542 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:47.542 11:03:58 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:47.542 11:03:58 -- dd/basic_rw.sh@56 -- # 
data=r5y35cetqydpekxz5qtd2ux5zfa07ta0vaqwy38ma687e6wjn0rev5m712o2peyqtqasho8rllg8rzq2hydnbhuh9pla2qzdet7dhnvd37i3t9spi8bxt7vpwmqbvybg02t5kfxsgjg3t13s3uw3yspgxv00tm1upch1wxs83ee9c4bqfkwx3il6y3un57srqqti74aof33qf8ntv4hm5e700x0g3ilwj4jecic1yn19nqz8jyq2ttgw4i47qvtdojmytyeyc830247cmrd5qh49a2clybtepx6xvcrstzecbbid7hv6dmp0fw4hiyj88g272m4kke4akmnxm8lnrey0049umkbqh1vxxyvsfcz2j19z52rmkvfwwe3dak8kcnhjqidgcf8ijx2vqfssunulytut7c5xvgmsw61enq27xxesz735pdxoq3eng6a9q8bvrhnsviaagb9vr0ikisxygkf2dgfo07a0eg9j3kd21ndhe28fabfuqep5pyemzdxj2buhsk1bca3i5fqeid80p6rfbnjal6xtk9qwhbd2haca702lhh7i8yviovkkdnn2qvzv3dqpb25p4g53ydqo3wvzp7qahz0cviveeglbbn2v6dfyzvoav6reofdr1dt61qw1ocr45xblb3uixqlx4fkqjif9fqu10yysj2me21dpl2x9syh262cdux7py9cvo03wsan6h4geaq797e1xfjww9ujorjl6qxrnw6ibkly4rcs70s37o7qj2keayvc7eftlvi8o2fumkj12ejb6xl85dubox4ksl52qqz5q0pni2ynd4iw6ip6ctkwp0lgeom8l27aqm6bc913k2ksvln4x3rd0fmr5ukqbb9xanbrh4joe5flhcavb1s97ihc0gxx592g49q91eu211k6mvfjoboyhyfh3mj1dme690bc14jkw4evj4y8ziw3ahr5cxj3vrorup4wob45su9f6wjtwo9o4txnhb8dj842qpr7y4vvuaiqsz3qhdrmd91x2nw7qye3pt50n3i04ihurmnufh9xtn4rk3gd9tgxunyqnqr6pvj43alc3huvvg5ex53t24pyebbtfxfu5p8cb2yqubyh9dtk7m2bd6nk3zoy6llbtetifg9g0uh1rjuawmbcqtp14dz5l2307wh03gexh1if4sh2ej67us8tvuo0la6u6ludk00l8g4a85kts7uu8oxla8uiijd5f1gajm0w0ks5d8f667132ai6uo5m623ae1fch8eheda0em4obp8elpbz1w3faphomk5yzklqt2yu5zuwbjt0ue8xw9a66hlt4oas7pz7i9ilgpwdbc4mys5acicy6n5k9t4gz3mvtmr76o0vx689dhpnmtvlvjjpe7ab2uccpuhohupi9ll6m2slikun4i8sfp5g3gjusinrhndx01t7g1gbygzpri9gmrwkk8kbszbqmlcj1srl5qslg3ebrms8dr3ryvjqrfwye4raclr0tpumx4k44flm2qdlmls6wi1dw79fqj97hd2lr6y7zxhi7cttn1p0hh0er85c1wc6zeppjfxe1tr8be2i25llviqi9y63z6x0n0tap14ojowdq6x891736sfyxzkwhpkgqzo7xrz3ko149cuofrzys37kr20moq8jotq2zlhp5t53a5fgcgc9oryb5m8ekfp4863aczpclecdljl5s4mjah63atd43zie9l557325jn0k6ce0dwz9yc9tzfv5ibtsd0oqly073uoq66rrr1doxayct3uc8evpndyenjh61c56nohlthnxtu03nfau1a8na3evyz4pshz2e2gu8g1jfaf19ks6kgzzh80azcgsmc704pjcaoogmd1uaps7nszxysf39gve6nsujztulq7j87hjb0uvk6kz222lp24yipdnb2pc2cjwpjc1yr7lkgtul1suzjzrrbcsk7qe24vbk9vmlcm7pr0r8t4lbz3q6jjunl2ag164i403hz6s8yzk0kwkk52dmbqle8kq4m4jxdry54x6ffiwo6g6lf37tx6w38kgljl8am3zxkp5eb59mahpoesg2zjss8gacl2is5phxiwvsuwr6pvtzdkigjcwkkdalk9unjua5pwif50n3c3v6cznaadc2e4tbtju9bgdz941tscfdl5tnuddwffut7futajlzj2klxw0u9s8l38gtyudwioiv6isw0vc8xu6qeeix2tquofb94w0z43krz6t4tm1uml3co86by3dm8zn6xrbycuqn3its8q9qtm4wx8mn0ulldp1ge3weragsvu626x6vhr56ancqtrapuhvi7f6uw8wwhzlnuwu1p33pj9gk2ar9ufegycs3v66c5qqqo34o6hdv12c9blavuxbv0crmoy47wqb6aa9kvt1kwtkk9xivzwj3we850uashp38tlyq2hc5f3x74770mqekai334sc10x3b3o23aiz8ohuhscw5iljeci5h9xjq7efamad32h2ufvtee87lvwu2gfa98dcmqe3pb3u9hmzmi1muogpy7sq4ok1txgj112d9fn5zsgal0x5j9q0d6xr2somtc7hyvcm4jdzxu65zvnwi85uuscx3q476z075abf6woitmkjabmb61h3jd1i5q79h2sfnspit5c2glsfj7nqfqyua4suqeaem0xamq25wwfu9ero8w6m6zlksj6dprpvkthg0w1o1l91ldph98e76u11qalwrrlzna8vkurxbhamkri0ozvablmdcv0mszl18nsi3lwo21uv1frbzr1jb2kcohtsq38xkwwn4p9406cct58eno613gkqq7cynq230w2pfaq496at89wcff0ys7a3oj89l0op27ev79uw3nzf9pg3qhovlcts8nrplipmsdmz76ap35924g96kp301z9h5ffsgmdlcwp62iia4gjwsn9nu8ediudflz3ao5c7571kv9h4ciykqef3x9obg6bu7go7mjq8yjlx5hydokija3w4j40eoblwj911cdcnxj263d7su6ldt26t2u35v35c9d7yp8qzmxx04qd8w2wmrac2vuv83jms7s2yvdn6zbwydy30xdq3vzhgimskm6wctl62orgexax9y8jl695w7torbuyeie01q4p78xm2pmorfiuq53sjahwla72b8x94lpmu8w0bqn33ce6k11jycdar0kpcsmbuf33taedef02plij5rcl8igqdofqfn7ru96xsaru2v6batam5z7ei846t8ucgrt87ubxcuofoqt7y9xuh4cuv19abaysr2d8cz1k6umk4b3i90l9r5d5dk252ukgu6z6qjoeusqxbton1zcu7se2rae5j9mk2wyeos2j4vu0e5jmp6aac15azxpne975j748ge8p2m7114w5j4t5v8zisrmcxgyultp707z1d6s9h3lf0o00bqd2u0i2a7hdt23k1dmte3l08oqy6r86kqjxn00f9aj12kneip4py2z6lq32ebd6ehsydf41
rq35gb0ulxmllpag51yrznqtvlszoey4wcppiunxhfg0dv3n9e4wneide8fpusjtdni5b78iwwfur20hs6nsk7a3iugr7quk7mbidmklsyhzrn9mvbdi7ftjemoi9882glpsp91qriudylpqk7mg4lxxbm4fgnn3dl9d543nrs4ud3jny2f0di4iqf3ul9youdtn3kshlihiztbkrciuy7vzd9zftbt6et84s9ruek1giganyv1in6u9yqlotnbagavcj53bx1qaaim4ui50gbyis7xeqf5vwu2gky02lst3u7j7zeimgmkgfee75t0wfgi13dfpnq97n0buqmvsm68mqzu8xm9biemf954siu2vvn5fyi2pi008edkrwltlj0q7efe3mp1m2nq7ixwarpokpdyuima7jft4fz8ijijx1podest71h7r8fk4zccc9ompw8shw4ryed1ro2rn9kbbdwg0sa0t34bl6gjbhkftho02789ipolwmaiytwjrkq1jgo24hrxqk7oarngrxdaba08rgtu9p0 00:07:47.542 11:03:58 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:47.542 11:03:58 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:47.542 11:03:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:47.542 11:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:47.542 [2024-12-06 11:03:58.672521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.543 [2024-12-06 11:03:58.672641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70010 ] 00:07:47.543 { 00:07:47.543 "subsystems": [ 00:07:47.543 { 00:07:47.543 "subsystem": "bdev", 00:07:47.543 "config": [ 00:07:47.543 { 00:07:47.543 "params": { 00:07:47.543 "trtype": "pcie", 00:07:47.543 "traddr": "0000:00:06.0", 00:07:47.543 "name": "Nvme0" 00:07:47.543 }, 00:07:47.543 "method": "bdev_nvme_attach_controller" 00:07:47.543 }, 00:07:47.543 { 00:07:47.543 "method": "bdev_wait_for_examine" 00:07:47.543 } 00:07:47.543 ] 00:07:47.543 } 00:07:47.543 ] 00:07:47.543 } 00:07:47.800 [2024-12-06 11:03:58.811969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.800 [2024-12-06 11:03:58.842875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.058  [2024-12-06T11:03:59.205Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:48.058 00:07:48.058 11:03:59 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:48.058 11:03:59 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:48.058 11:03:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:48.058 11:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.058 [2024-12-06 11:03:59.155731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
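The dd_rw_offset test above is a seek/skip round trip: the 4096-byte string generated into dd.dump0 is written one block into the bdev with --seek=1, read back from the same offset with --skip=1, and compared against the original. A minimal sketch of that flow (conf.json again standing in for the JSON config passed on /dev/fd/62; the redirection in the compare step is inferred, since xtrace does not print redirections):

# write the 4 KiB pattern at a one-block offset
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json
# read one block back from the same offset
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json conf.json
# the first 4096 bytes read back must equal the generated string
read -rn4096 data_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $data_check == "$data" ]]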
00:07:48.058 [2024-12-06 11:03:59.155828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70018 ] 00:07:48.058 { 00:07:48.058 "subsystems": [ 00:07:48.058 { 00:07:48.058 "subsystem": "bdev", 00:07:48.058 "config": [ 00:07:48.058 { 00:07:48.058 "params": { 00:07:48.058 "trtype": "pcie", 00:07:48.058 "traddr": "0000:00:06.0", 00:07:48.058 "name": "Nvme0" 00:07:48.058 }, 00:07:48.058 "method": "bdev_nvme_attach_controller" 00:07:48.058 }, 00:07:48.058 { 00:07:48.058 "method": "bdev_wait_for_examine" 00:07:48.058 } 00:07:48.058 ] 00:07:48.058 } 00:07:48.058 ] 00:07:48.058 } 00:07:48.317 [2024-12-06 11:03:59.293059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.317 [2024-12-06 11:03:59.327502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.317  [2024-12-06T11:03:59.725Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:48.578 00:07:48.578 11:03:59 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:48.578 11:03:59 -- dd/basic_rw.sh@72 -- # [[ r5y35cetqydpekxz5qtd2ux5zfa07ta0vaqwy38ma687e6wjn0rev5m712o2peyqtqasho8rllg8rzq2hydnbhuh9pla2qzdet7dhnvd37i3t9spi8bxt7vpwmqbvybg02t5kfxsgjg3t13s3uw3yspgxv00tm1upch1wxs83ee9c4bqfkwx3il6y3un57srqqti74aof33qf8ntv4hm5e700x0g3ilwj4jecic1yn19nqz8jyq2ttgw4i47qvtdojmytyeyc830247cmrd5qh49a2clybtepx6xvcrstzecbbid7hv6dmp0fw4hiyj88g272m4kke4akmnxm8lnrey0049umkbqh1vxxyvsfcz2j19z52rmkvfwwe3dak8kcnhjqidgcf8ijx2vqfssunulytut7c5xvgmsw61enq27xxesz735pdxoq3eng6a9q8bvrhnsviaagb9vr0ikisxygkf2dgfo07a0eg9j3kd21ndhe28fabfuqep5pyemzdxj2buhsk1bca3i5fqeid80p6rfbnjal6xtk9qwhbd2haca702lhh7i8yviovkkdnn2qvzv3dqpb25p4g53ydqo3wvzp7qahz0cviveeglbbn2v6dfyzvoav6reofdr1dt61qw1ocr45xblb3uixqlx4fkqjif9fqu10yysj2me21dpl2x9syh262cdux7py9cvo03wsan6h4geaq797e1xfjww9ujorjl6qxrnw6ibkly4rcs70s37o7qj2keayvc7eftlvi8o2fumkj12ejb6xl85dubox4ksl52qqz5q0pni2ynd4iw6ip6ctkwp0lgeom8l27aqm6bc913k2ksvln4x3rd0fmr5ukqbb9xanbrh4joe5flhcavb1s97ihc0gxx592g49q91eu211k6mvfjoboyhyfh3mj1dme690bc14jkw4evj4y8ziw3ahr5cxj3vrorup4wob45su9f6wjtwo9o4txnhb8dj842qpr7y4vvuaiqsz3qhdrmd91x2nw7qye3pt50n3i04ihurmnufh9xtn4rk3gd9tgxunyqnqr6pvj43alc3huvvg5ex53t24pyebbtfxfu5p8cb2yqubyh9dtk7m2bd6nk3zoy6llbtetifg9g0uh1rjuawmbcqtp14dz5l2307wh03gexh1if4sh2ej67us8tvuo0la6u6ludk00l8g4a85kts7uu8oxla8uiijd5f1gajm0w0ks5d8f667132ai6uo5m623ae1fch8eheda0em4obp8elpbz1w3faphomk5yzklqt2yu5zuwbjt0ue8xw9a66hlt4oas7pz7i9ilgpwdbc4mys5acicy6n5k9t4gz3mvtmr76o0vx689dhpnmtvlvjjpe7ab2uccpuhohupi9ll6m2slikun4i8sfp5g3gjusinrhndx01t7g1gbygzpri9gmrwkk8kbszbqmlcj1srl5qslg3ebrms8dr3ryvjqrfwye4raclr0tpumx4k44flm2qdlmls6wi1dw79fqj97hd2lr6y7zxhi7cttn1p0hh0er85c1wc6zeppjfxe1tr8be2i25llviqi9y63z6x0n0tap14ojowdq6x891736sfyxzkwhpkgqzo7xrz3ko149cuofrzys37kr20moq8jotq2zlhp5t53a5fgcgc9oryb5m8ekfp4863aczpclecdljl5s4mjah63atd43zie9l557325jn0k6ce0dwz9yc9tzfv5ibtsd0oqly073uoq66rrr1doxayct3uc8evpndyenjh61c56nohlthnxtu03nfau1a8na3evyz4pshz2e2gu8g1jfaf19ks6kgzzh80azcgsmc704pjcaoogmd1uaps7nszxysf39gve6nsujztulq7j87hjb0uvk6kz222lp24yipdnb2pc2cjwpjc1yr7lkgtul1suzjzrrbcsk7qe24vbk9vmlcm7pr0r8t4lbz3q6jjunl2ag164i403hz6s8yzk0kwkk52dmbqle8kq4m4jxdry54x6ffiwo6g6lf37tx6w38kgljl8am3zxkp5eb59mahpoesg2zjss8gacl2is5phxiwvsuwr6pvtzdkigjcwkkdalk9unjua5pwif50n3c3v6cznaadc2e4tbtju9bgdz941tscfdl5tnuddwffut7futajlzj2klxw0u9s8l38gtyudwioiv6isw0vc8xu6qeeix2tquofb94w0z43krz6t4tm1uml3co86by3dm8zn6xrbycuqn3its8q9qtm4wx8mn0ulldp1ge3weragsvu626x6vhr56ancqtrapuhvi7f6uw8wwhzl
nuwu1p33pj9gk2ar9ufegycs3v66c5qqqo34o6hdv12c9blavuxbv0crmoy47wqb6aa9kvt1kwtkk9xivzwj3we850uashp38tlyq2hc5f3x74770mqekai334sc10x3b3o23aiz8ohuhscw5iljeci5h9xjq7efamad32h2ufvtee87lvwu2gfa98dcmqe3pb3u9hmzmi1muogpy7sq4ok1txgj112d9fn5zsgal0x5j9q0d6xr2somtc7hyvcm4jdzxu65zvnwi85uuscx3q476z075abf6woitmkjabmb61h3jd1i5q79h2sfnspit5c2glsfj7nqfqyua4suqeaem0xamq25wwfu9ero8w6m6zlksj6dprpvkthg0w1o1l91ldph98e76u11qalwrrlzna8vkurxbhamkri0ozvablmdcv0mszl18nsi3lwo21uv1frbzr1jb2kcohtsq38xkwwn4p9406cct58eno613gkqq7cynq230w2pfaq496at89wcff0ys7a3oj89l0op27ev79uw3nzf9pg3qhovlcts8nrplipmsdmz76ap35924g96kp301z9h5ffsgmdlcwp62iia4gjwsn9nu8ediudflz3ao5c7571kv9h4ciykqef3x9obg6bu7go7mjq8yjlx5hydokija3w4j40eoblwj911cdcnxj263d7su6ldt26t2u35v35c9d7yp8qzmxx04qd8w2wmrac2vuv83jms7s2yvdn6zbwydy30xdq3vzhgimskm6wctl62orgexax9y8jl695w7torbuyeie01q4p78xm2pmorfiuq53sjahwla72b8x94lpmu8w0bqn33ce6k11jycdar0kpcsmbuf33taedef02plij5rcl8igqdofqfn7ru96xsaru2v6batam5z7ei846t8ucgrt87ubxcuofoqt7y9xuh4cuv19abaysr2d8cz1k6umk4b3i90l9r5d5dk252ukgu6z6qjoeusqxbton1zcu7se2rae5j9mk2wyeos2j4vu0e5jmp6aac15azxpne975j748ge8p2m7114w5j4t5v8zisrmcxgyultp707z1d6s9h3lf0o00bqd2u0i2a7hdt23k1dmte3l08oqy6r86kqjxn00f9aj12kneip4py2z6lq32ebd6ehsydf41rq35gb0ulxmllpag51yrznqtvlszoey4wcppiunxhfg0dv3n9e4wneide8fpusjtdni5b78iwwfur20hs6nsk7a3iugr7quk7mbidmklsyhzrn9mvbdi7ftjemoi9882glpsp91qriudylpqk7mg4lxxbm4fgnn3dl9d543nrs4ud3jny2f0di4iqf3ul9youdtn3kshlihiztbkrciuy7vzd9zftbt6et84s9ruek1giganyv1in6u9yqlotnbagavcj53bx1qaaim4ui50gbyis7xeqf5vwu2gky02lst3u7j7zeimgmkgfee75t0wfgi13dfpnq97n0buqmvsm68mqzu8xm9biemf954siu2vvn5fyi2pi008edkrwltlj0q7efe3mp1m2nq7ixwarpokpdyuima7jft4fz8ijijx1podest71h7r8fk4zccc9ompw8shw4ryed1ro2rn9kbbdwg0sa0t34bl6gjbhkftho02789ipolwmaiytwjrkq1jgo24hrxqk7oarngrxdaba08rgtu9p0 == \r\5\y\3\5\c\e\t\q\y\d\p\e\k\x\z\5\q\t\d\2\u\x\5\z\f\a\0\7\t\a\0\v\a\q\w\y\3\8\m\a\6\8\7\e\6\w\j\n\0\r\e\v\5\m\7\1\2\o\2\p\e\y\q\t\q\a\s\h\o\8\r\l\l\g\8\r\z\q\2\h\y\d\n\b\h\u\h\9\p\l\a\2\q\z\d\e\t\7\d\h\n\v\d\3\7\i\3\t\9\s\p\i\8\b\x\t\7\v\p\w\m\q\b\v\y\b\g\0\2\t\5\k\f\x\s\g\j\g\3\t\1\3\s\3\u\w\3\y\s\p\g\x\v\0\0\t\m\1\u\p\c\h\1\w\x\s\8\3\e\e\9\c\4\b\q\f\k\w\x\3\i\l\6\y\3\u\n\5\7\s\r\q\q\t\i\7\4\a\o\f\3\3\q\f\8\n\t\v\4\h\m\5\e\7\0\0\x\0\g\3\i\l\w\j\4\j\e\c\i\c\1\y\n\1\9\n\q\z\8\j\y\q\2\t\t\g\w\4\i\4\7\q\v\t\d\o\j\m\y\t\y\e\y\c\8\3\0\2\4\7\c\m\r\d\5\q\h\4\9\a\2\c\l\y\b\t\e\p\x\6\x\v\c\r\s\t\z\e\c\b\b\i\d\7\h\v\6\d\m\p\0\f\w\4\h\i\y\j\8\8\g\2\7\2\m\4\k\k\e\4\a\k\m\n\x\m\8\l\n\r\e\y\0\0\4\9\u\m\k\b\q\h\1\v\x\x\y\v\s\f\c\z\2\j\1\9\z\5\2\r\m\k\v\f\w\w\e\3\d\a\k\8\k\c\n\h\j\q\i\d\g\c\f\8\i\j\x\2\v\q\f\s\s\u\n\u\l\y\t\u\t\7\c\5\x\v\g\m\s\w\6\1\e\n\q\2\7\x\x\e\s\z\7\3\5\p\d\x\o\q\3\e\n\g\6\a\9\q\8\b\v\r\h\n\s\v\i\a\a\g\b\9\v\r\0\i\k\i\s\x\y\g\k\f\2\d\g\f\o\0\7\a\0\e\g\9\j\3\k\d\2\1\n\d\h\e\2\8\f\a\b\f\u\q\e\p\5\p\y\e\m\z\d\x\j\2\b\u\h\s\k\1\b\c\a\3\i\5\f\q\e\i\d\8\0\p\6\r\f\b\n\j\a\l\6\x\t\k\9\q\w\h\b\d\2\h\a\c\a\7\0\2\l\h\h\7\i\8\y\v\i\o\v\k\k\d\n\n\2\q\v\z\v\3\d\q\p\b\2\5\p\4\g\5\3\y\d\q\o\3\w\v\z\p\7\q\a\h\z\0\c\v\i\v\e\e\g\l\b\b\n\2\v\6\d\f\y\z\v\o\a\v\6\r\e\o\f\d\r\1\d\t\6\1\q\w\1\o\c\r\4\5\x\b\l\b\3\u\i\x\q\l\x\4\f\k\q\j\i\f\9\f\q\u\1\0\y\y\s\j\2\m\e\2\1\d\p\l\2\x\9\s\y\h\2\6\2\c\d\u\x\7\p\y\9\c\v\o\0\3\w\s\a\n\6\h\4\g\e\a\q\7\9\7\e\1\x\f\j\w\w\9\u\j\o\r\j\l\6\q\x\r\n\w\6\i\b\k\l\y\4\r\c\s\7\0\s\3\7\o\7\q\j\2\k\e\a\y\v\c\7\e\f\t\l\v\i\8\o\2\f\u\m\k\j\1\2\e\j\b\6\x\l\8\5\d\u\b\o\x\4\k\s\l\5\2\q\q\z\5\q\0\p\n\i\2\y\n\d\4\i\w\6\i\p\6\c\t\k\w\p\0\l\g\e\o\m\8\l\2\7\a\q\m\6\b\c\9\1\3\k\2\k\s\v\l\n\4\x\3\r\d\0\f\m\r\5\u\k\q\b\b\9\x\a\n\b\r\h\4\j\o\e\5\f\l\h\c\a\v\b\1\s\9\7\i\h\c\0\g\x\x\5\9\2\g
\4\9\q\9\1\e\u\2\1\1\k\6\m\v\f\j\o\b\o\y\h\y\f\h\3\m\j\1\d\m\e\6\9\0\b\c\1\4\j\k\w\4\e\v\j\4\y\8\z\i\w\3\a\h\r\5\c\x\j\3\v\r\o\r\u\p\4\w\o\b\4\5\s\u\9\f\6\w\j\t\w\o\9\o\4\t\x\n\h\b\8\d\j\8\4\2\q\p\r\7\y\4\v\v\u\a\i\q\s\z\3\q\h\d\r\m\d\9\1\x\2\n\w\7\q\y\e\3\p\t\5\0\n\3\i\0\4\i\h\u\r\m\n\u\f\h\9\x\t\n\4\r\k\3\g\d\9\t\g\x\u\n\y\q\n\q\r\6\p\v\j\4\3\a\l\c\3\h\u\v\v\g\5\e\x\5\3\t\2\4\p\y\e\b\b\t\f\x\f\u\5\p\8\c\b\2\y\q\u\b\y\h\9\d\t\k\7\m\2\b\d\6\n\k\3\z\o\y\6\l\l\b\t\e\t\i\f\g\9\g\0\u\h\1\r\j\u\a\w\m\b\c\q\t\p\1\4\d\z\5\l\2\3\0\7\w\h\0\3\g\e\x\h\1\i\f\4\s\h\2\e\j\6\7\u\s\8\t\v\u\o\0\l\a\6\u\6\l\u\d\k\0\0\l\8\g\4\a\8\5\k\t\s\7\u\u\8\o\x\l\a\8\u\i\i\j\d\5\f\1\g\a\j\m\0\w\0\k\s\5\d\8\f\6\6\7\1\3\2\a\i\6\u\o\5\m\6\2\3\a\e\1\f\c\h\8\e\h\e\d\a\0\e\m\4\o\b\p\8\e\l\p\b\z\1\w\3\f\a\p\h\o\m\k\5\y\z\k\l\q\t\2\y\u\5\z\u\w\b\j\t\0\u\e\8\x\w\9\a\6\6\h\l\t\4\o\a\s\7\p\z\7\i\9\i\l\g\p\w\d\b\c\4\m\y\s\5\a\c\i\c\y\6\n\5\k\9\t\4\g\z\3\m\v\t\m\r\7\6\o\0\v\x\6\8\9\d\h\p\n\m\t\v\l\v\j\j\p\e\7\a\b\2\u\c\c\p\u\h\o\h\u\p\i\9\l\l\6\m\2\s\l\i\k\u\n\4\i\8\s\f\p\5\g\3\g\j\u\s\i\n\r\h\n\d\x\0\1\t\7\g\1\g\b\y\g\z\p\r\i\9\g\m\r\w\k\k\8\k\b\s\z\b\q\m\l\c\j\1\s\r\l\5\q\s\l\g\3\e\b\r\m\s\8\d\r\3\r\y\v\j\q\r\f\w\y\e\4\r\a\c\l\r\0\t\p\u\m\x\4\k\4\4\f\l\m\2\q\d\l\m\l\s\6\w\i\1\d\w\7\9\f\q\j\9\7\h\d\2\l\r\6\y\7\z\x\h\i\7\c\t\t\n\1\p\0\h\h\0\e\r\8\5\c\1\w\c\6\z\e\p\p\j\f\x\e\1\t\r\8\b\e\2\i\2\5\l\l\v\i\q\i\9\y\6\3\z\6\x\0\n\0\t\a\p\1\4\o\j\o\w\d\q\6\x\8\9\1\7\3\6\s\f\y\x\z\k\w\h\p\k\g\q\z\o\7\x\r\z\3\k\o\1\4\9\c\u\o\f\r\z\y\s\3\7\k\r\2\0\m\o\q\8\j\o\t\q\2\z\l\h\p\5\t\5\3\a\5\f\g\c\g\c\9\o\r\y\b\5\m\8\e\k\f\p\4\8\6\3\a\c\z\p\c\l\e\c\d\l\j\l\5\s\4\m\j\a\h\6\3\a\t\d\4\3\z\i\e\9\l\5\5\7\3\2\5\j\n\0\k\6\c\e\0\d\w\z\9\y\c\9\t\z\f\v\5\i\b\t\s\d\0\o\q\l\y\0\7\3\u\o\q\6\6\r\r\r\1\d\o\x\a\y\c\t\3\u\c\8\e\v\p\n\d\y\e\n\j\h\6\1\c\5\6\n\o\h\l\t\h\n\x\t\u\0\3\n\f\a\u\1\a\8\n\a\3\e\v\y\z\4\p\s\h\z\2\e\2\g\u\8\g\1\j\f\a\f\1\9\k\s\6\k\g\z\z\h\8\0\a\z\c\g\s\m\c\7\0\4\p\j\c\a\o\o\g\m\d\1\u\a\p\s\7\n\s\z\x\y\s\f\3\9\g\v\e\6\n\s\u\j\z\t\u\l\q\7\j\8\7\h\j\b\0\u\v\k\6\k\z\2\2\2\l\p\2\4\y\i\p\d\n\b\2\p\c\2\c\j\w\p\j\c\1\y\r\7\l\k\g\t\u\l\1\s\u\z\j\z\r\r\b\c\s\k\7\q\e\2\4\v\b\k\9\v\m\l\c\m\7\p\r\0\r\8\t\4\l\b\z\3\q\6\j\j\u\n\l\2\a\g\1\6\4\i\4\0\3\h\z\6\s\8\y\z\k\0\k\w\k\k\5\2\d\m\b\q\l\e\8\k\q\4\m\4\j\x\d\r\y\5\4\x\6\f\f\i\w\o\6\g\6\l\f\3\7\t\x\6\w\3\8\k\g\l\j\l\8\a\m\3\z\x\k\p\5\e\b\5\9\m\a\h\p\o\e\s\g\2\z\j\s\s\8\g\a\c\l\2\i\s\5\p\h\x\i\w\v\s\u\w\r\6\p\v\t\z\d\k\i\g\j\c\w\k\k\d\a\l\k\9\u\n\j\u\a\5\p\w\i\f\5\0\n\3\c\3\v\6\c\z\n\a\a\d\c\2\e\4\t\b\t\j\u\9\b\g\d\z\9\4\1\t\s\c\f\d\l\5\t\n\u\d\d\w\f\f\u\t\7\f\u\t\a\j\l\z\j\2\k\l\x\w\0\u\9\s\8\l\3\8\g\t\y\u\d\w\i\o\i\v\6\i\s\w\0\v\c\8\x\u\6\q\e\e\i\x\2\t\q\u\o\f\b\9\4\w\0\z\4\3\k\r\z\6\t\4\t\m\1\u\m\l\3\c\o\8\6\b\y\3\d\m\8\z\n\6\x\r\b\y\c\u\q\n\3\i\t\s\8\q\9\q\t\m\4\w\x\8\m\n\0\u\l\l\d\p\1\g\e\3\w\e\r\a\g\s\v\u\6\2\6\x\6\v\h\r\5\6\a\n\c\q\t\r\a\p\u\h\v\i\7\f\6\u\w\8\w\w\h\z\l\n\u\w\u\1\p\3\3\p\j\9\g\k\2\a\r\9\u\f\e\g\y\c\s\3\v\6\6\c\5\q\q\q\o\3\4\o\6\h\d\v\1\2\c\9\b\l\a\v\u\x\b\v\0\c\r\m\o\y\4\7\w\q\b\6\a\a\9\k\v\t\1\k\w\t\k\k\9\x\i\v\z\w\j\3\w\e\8\5\0\u\a\s\h\p\3\8\t\l\y\q\2\h\c\5\f\3\x\7\4\7\7\0\m\q\e\k\a\i\3\3\4\s\c\1\0\x\3\b\3\o\2\3\a\i\z\8\o\h\u\h\s\c\w\5\i\l\j\e\c\i\5\h\9\x\j\q\7\e\f\a\m\a\d\3\2\h\2\u\f\v\t\e\e\8\7\l\v\w\u\2\g\f\a\9\8\d\c\m\q\e\3\p\b\3\u\9\h\m\z\m\i\1\m\u\o\g\p\y\7\s\q\4\o\k\1\t\x\g\j\1\1\2\d\9\f\n\5\z\s\g\a\l\0\x\5\j\9\q\0\d\6\x\r\2\s\o\m\t\c\7\h\y\v\c\m\4\j\d\z\x\u\6\5\z\v\n\w\i\8\5\u\u\s\c\x\3\q\4\7\6\z\0\7\5\a\b\f\6\w\o\i\t\m\k\j\a\b\m\b\6\1\h\3\j\d\1\i\5\q\7\9\h\2\s\f\n\s\p\i\t\5\c\2\g\
l\s\f\j\7\n\q\f\q\y\u\a\4\s\u\q\e\a\e\m\0\x\a\m\q\2\5\w\w\f\u\9\e\r\o\8\w\6\m\6\z\l\k\s\j\6\d\p\r\p\v\k\t\h\g\0\w\1\o\1\l\9\1\l\d\p\h\9\8\e\7\6\u\1\1\q\a\l\w\r\r\l\z\n\a\8\v\k\u\r\x\b\h\a\m\k\r\i\0\o\z\v\a\b\l\m\d\c\v\0\m\s\z\l\1\8\n\s\i\3\l\w\o\2\1\u\v\1\f\r\b\z\r\1\j\b\2\k\c\o\h\t\s\q\3\8\x\k\w\w\n\4\p\9\4\0\6\c\c\t\5\8\e\n\o\6\1\3\g\k\q\q\7\c\y\n\q\2\3\0\w\2\p\f\a\q\4\9\6\a\t\8\9\w\c\f\f\0\y\s\7\a\3\o\j\8\9\l\0\o\p\2\7\e\v\7\9\u\w\3\n\z\f\9\p\g\3\q\h\o\v\l\c\t\s\8\n\r\p\l\i\p\m\s\d\m\z\7\6\a\p\3\5\9\2\4\g\9\6\k\p\3\0\1\z\9\h\5\f\f\s\g\m\d\l\c\w\p\6\2\i\i\a\4\g\j\w\s\n\9\n\u\8\e\d\i\u\d\f\l\z\3\a\o\5\c\7\5\7\1\k\v\9\h\4\c\i\y\k\q\e\f\3\x\9\o\b\g\6\b\u\7\g\o\7\m\j\q\8\y\j\l\x\5\h\y\d\o\k\i\j\a\3\w\4\j\4\0\e\o\b\l\w\j\9\1\1\c\d\c\n\x\j\2\6\3\d\7\s\u\6\l\d\t\2\6\t\2\u\3\5\v\3\5\c\9\d\7\y\p\8\q\z\m\x\x\0\4\q\d\8\w\2\w\m\r\a\c\2\v\u\v\8\3\j\m\s\7\s\2\y\v\d\n\6\z\b\w\y\d\y\3\0\x\d\q\3\v\z\h\g\i\m\s\k\m\6\w\c\t\l\6\2\o\r\g\e\x\a\x\9\y\8\j\l\6\9\5\w\7\t\o\r\b\u\y\e\i\e\0\1\q\4\p\7\8\x\m\2\p\m\o\r\f\i\u\q\5\3\s\j\a\h\w\l\a\7\2\b\8\x\9\4\l\p\m\u\8\w\0\b\q\n\3\3\c\e\6\k\1\1\j\y\c\d\a\r\0\k\p\c\s\m\b\u\f\3\3\t\a\e\d\e\f\0\2\p\l\i\j\5\r\c\l\8\i\g\q\d\o\f\q\f\n\7\r\u\9\6\x\s\a\r\u\2\v\6\b\a\t\a\m\5\z\7\e\i\8\4\6\t\8\u\c\g\r\t\8\7\u\b\x\c\u\o\f\o\q\t\7\y\9\x\u\h\4\c\u\v\1\9\a\b\a\y\s\r\2\d\8\c\z\1\k\6\u\m\k\4\b\3\i\9\0\l\9\r\5\d\5\d\k\2\5\2\u\k\g\u\6\z\6\q\j\o\e\u\s\q\x\b\t\o\n\1\z\c\u\7\s\e\2\r\a\e\5\j\9\m\k\2\w\y\e\o\s\2\j\4\v\u\0\e\5\j\m\p\6\a\a\c\1\5\a\z\x\p\n\e\9\7\5\j\7\4\8\g\e\8\p\2\m\7\1\1\4\w\5\j\4\t\5\v\8\z\i\s\r\m\c\x\g\y\u\l\t\p\7\0\7\z\1\d\6\s\9\h\3\l\f\0\o\0\0\b\q\d\2\u\0\i\2\a\7\h\d\t\2\3\k\1\d\m\t\e\3\l\0\8\o\q\y\6\r\8\6\k\q\j\x\n\0\0\f\9\a\j\1\2\k\n\e\i\p\4\p\y\2\z\6\l\q\3\2\e\b\d\6\e\h\s\y\d\f\4\1\r\q\3\5\g\b\0\u\l\x\m\l\l\p\a\g\5\1\y\r\z\n\q\t\v\l\s\z\o\e\y\4\w\c\p\p\i\u\n\x\h\f\g\0\d\v\3\n\9\e\4\w\n\e\i\d\e\8\f\p\u\s\j\t\d\n\i\5\b\7\8\i\w\w\f\u\r\2\0\h\s\6\n\s\k\7\a\3\i\u\g\r\7\q\u\k\7\m\b\i\d\m\k\l\s\y\h\z\r\n\9\m\v\b\d\i\7\f\t\j\e\m\o\i\9\8\8\2\g\l\p\s\p\9\1\q\r\i\u\d\y\l\p\q\k\7\m\g\4\l\x\x\b\m\4\f\g\n\n\3\d\l\9\d\5\4\3\n\r\s\4\u\d\3\j\n\y\2\f\0\d\i\4\i\q\f\3\u\l\9\y\o\u\d\t\n\3\k\s\h\l\i\h\i\z\t\b\k\r\c\i\u\y\7\v\z\d\9\z\f\t\b\t\6\e\t\8\4\s\9\r\u\e\k\1\g\i\g\a\n\y\v\1\i\n\6\u\9\y\q\l\o\t\n\b\a\g\a\v\c\j\5\3\b\x\1\q\a\a\i\m\4\u\i\5\0\g\b\y\i\s\7\x\e\q\f\5\v\w\u\2\g\k\y\0\2\l\s\t\3\u\7\j\7\z\e\i\m\g\m\k\g\f\e\e\7\5\t\0\w\f\g\i\1\3\d\f\p\n\q\9\7\n\0\b\u\q\m\v\s\m\6\8\m\q\z\u\8\x\m\9\b\i\e\m\f\9\5\4\s\i\u\2\v\v\n\5\f\y\i\2\p\i\0\0\8\e\d\k\r\w\l\t\l\j\0\q\7\e\f\e\3\m\p\1\m\2\n\q\7\i\x\w\a\r\p\o\k\p\d\y\u\i\m\a\7\j\f\t\4\f\z\8\i\j\i\j\x\1\p\o\d\e\s\t\7\1\h\7\r\8\f\k\4\z\c\c\c\9\o\m\p\w\8\s\h\w\4\r\y\e\d\1\r\o\2\r\n\9\k\b\b\d\w\g\0\s\a\0\t\3\4\b\l\6\g\j\b\h\k\f\t\h\o\0\2\7\8\9\i\p\o\l\w\m\a\i\y\t\w\j\r\k\q\1\j\g\o\2\4\h\r\x\q\k\7\o\a\r\n\g\r\x\d\a\b\a\0\8\r\g\t\u\9\p\0 ]] 00:07:48.578 ************************************ 00:07:48.578 END TEST dd_rw_offset 00:07:48.578 ************************************ 00:07:48.578 00:07:48.578 real 0m1.010s 00:07:48.578 user 0m0.663s 00:07:48.578 sys 0m0.221s 00:07:48.578 11:03:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.578 11:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.578 11:03:59 -- dd/basic_rw.sh@1 -- # cleanup 00:07:48.578 11:03:59 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:48.578 11:03:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.578 11:03:59 -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.578 11:03:59 -- dd/common.sh@12 -- # local size=0xffff 00:07:48.578 11:03:59 -- dd/common.sh@14 -- 
# local bs=1048576 00:07:48.578 11:03:59 -- dd/common.sh@15 -- # local count=1 00:07:48.578 11:03:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.578 11:03:59 -- dd/common.sh@18 -- # gen_conf 00:07:48.578 11:03:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:48.578 11:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.578 [2024-12-06 11:03:59.682249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.578 [2024-12-06 11:03:59.682337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70046 ] 00:07:48.578 { 00:07:48.578 "subsystems": [ 00:07:48.578 { 00:07:48.578 "subsystem": "bdev", 00:07:48.578 "config": [ 00:07:48.578 { 00:07:48.578 "params": { 00:07:48.578 "trtype": "pcie", 00:07:48.578 "traddr": "0000:00:06.0", 00:07:48.578 "name": "Nvme0" 00:07:48.578 }, 00:07:48.579 "method": "bdev_nvme_attach_controller" 00:07:48.579 }, 00:07:48.579 { 00:07:48.579 "method": "bdev_wait_for_examine" 00:07:48.579 } 00:07:48.579 ] 00:07:48.579 } 00:07:48.579 ] 00:07:48.579 } 00:07:48.837 [2024-12-06 11:03:59.823525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.837 [2024-12-06 11:03:59.854060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.837  [2024-12-06T11:04:00.243Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.096 00:07:49.096 11:04:00 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.096 ************************************ 00:07:49.096 END TEST spdk_dd_basic_rw 00:07:49.096 ************************************ 00:07:49.096 00:07:49.096 real 0m14.110s 00:07:49.096 user 0m9.978s 00:07:49.096 sys 0m2.711s 00:07:49.096 11:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.096 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.096 11:04:00 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:49.096 11:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.096 11:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.096 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.096 ************************************ 00:07:49.096 START TEST spdk_dd_posix 00:07:49.096 ************************************ 00:07:49.096 11:04:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:49.356 * Looking for test storage... 
00:07:49.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.356 11:04:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.356 11:04:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.356 11:04:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.356 11:04:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.356 11:04:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.356 11:04:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.356 11:04:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.356 11:04:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.356 11:04:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.356 11:04:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.356 11:04:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.356 11:04:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.356 11:04:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.356 11:04:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.356 11:04:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.356 11:04:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.356 11:04:00 -- scripts/common.sh@344 -- # : 1 00:07:49.356 11:04:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.356 11:04:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.356 11:04:00 -- scripts/common.sh@364 -- # decimal 1 00:07:49.356 11:04:00 -- scripts/common.sh@352 -- # local d=1 00:07:49.356 11:04:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.356 11:04:00 -- scripts/common.sh@354 -- # echo 1 00:07:49.356 11:04:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.356 11:04:00 -- scripts/common.sh@365 -- # decimal 2 00:07:49.356 11:04:00 -- scripts/common.sh@352 -- # local d=2 00:07:49.356 11:04:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.356 11:04:00 -- scripts/common.sh@354 -- # echo 2 00:07:49.356 11:04:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.356 11:04:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.356 11:04:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.356 11:04:00 -- scripts/common.sh@367 -- # return 0 00:07:49.356 11:04:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.356 11:04:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.356 --rc genhtml_branch_coverage=1 00:07:49.356 --rc genhtml_function_coverage=1 00:07:49.356 --rc genhtml_legend=1 00:07:49.356 --rc geninfo_all_blocks=1 00:07:49.356 --rc geninfo_unexecuted_blocks=1 00:07:49.356 00:07:49.356 ' 00:07:49.356 11:04:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.356 --rc genhtml_branch_coverage=1 00:07:49.356 --rc genhtml_function_coverage=1 00:07:49.356 --rc genhtml_legend=1 00:07:49.356 --rc geninfo_all_blocks=1 00:07:49.356 --rc geninfo_unexecuted_blocks=1 00:07:49.356 00:07:49.356 ' 00:07:49.356 11:04:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.356 --rc genhtml_branch_coverage=1 00:07:49.356 --rc genhtml_function_coverage=1 00:07:49.356 --rc genhtml_legend=1 00:07:49.356 --rc geninfo_all_blocks=1 00:07:49.356 --rc geninfo_unexecuted_blocks=1 00:07:49.356 00:07:49.356 ' 00:07:49.356 11:04:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.356 --rc genhtml_branch_coverage=1 00:07:49.356 --rc genhtml_function_coverage=1 00:07:49.356 --rc genhtml_legend=1 00:07:49.356 --rc geninfo_all_blocks=1 00:07:49.356 --rc geninfo_unexecuted_blocks=1 00:07:49.356 00:07:49.356 ' 00:07:49.356 11:04:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.356 11:04:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.356 11:04:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.356 11:04:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.356 11:04:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.357 11:04:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.357 11:04:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.357 11:04:00 -- paths/export.sh@5 -- # export PATH 00:07:49.357 11:04:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.357 11:04:00 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:49.357 11:04:00 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:49.357 11:04:00 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:49.357 11:04:00 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:49.357 11:04:00 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.357 11:04:00 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.357 11:04:00 -- dd/posix.sh@130 -- # tests 00:07:49.357 11:04:00 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:49.357 * First test run, liburing in use 00:07:49.357 11:04:00 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:49.357 11:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.357 11:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.357 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.357 ************************************ 00:07:49.357 START TEST dd_flag_append 00:07:49.357 ************************************ 00:07:49.357 11:04:00 -- common/autotest_common.sh@1114 -- # append 00:07:49.357 11:04:00 -- dd/posix.sh@16 -- # local dump0 00:07:49.357 11:04:00 -- dd/posix.sh@17 -- # local dump1 00:07:49.357 11:04:00 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:49.357 11:04:00 -- dd/common.sh@98 -- # xtrace_disable 00:07:49.357 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.357 11:04:00 -- dd/posix.sh@19 -- # dump0=wgmi5pl3jewz8kv70f9z9oo9jghzb903 00:07:49.357 11:04:00 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:49.357 11:04:00 -- dd/common.sh@98 -- # xtrace_disable 00:07:49.357 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.357 11:04:00 -- dd/posix.sh@20 -- # dump1=l944e712k9z3nb6g1hp7jawixc6lzcnw 00:07:49.357 11:04:00 -- dd/posix.sh@22 -- # printf %s wgmi5pl3jewz8kv70f9z9oo9jghzb903 00:07:49.357 11:04:00 -- dd/posix.sh@23 -- # printf %s l944e712k9z3nb6g1hp7jawixc6lzcnw 00:07:49.357 11:04:00 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:49.357 [2024-12-06 11:04:00.438244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:49.357 [2024-12-06 11:04:00.438815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70110 ] 00:07:49.615 [2024-12-06 11:04:00.576023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.615 [2024-12-06 11:04:00.606503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.615  [2024-12-06T11:04:01.021Z] Copying: 32/32 [B] (average 31 kBps) 00:07:49.874 00:07:49.874 11:04:00 -- dd/posix.sh@27 -- # [[ l944e712k9z3nb6g1hp7jawixc6lzcnwwgmi5pl3jewz8kv70f9z9oo9jghzb903 == \l\9\4\4\e\7\1\2\k\9\z\3\n\b\6\g\1\h\p\7\j\a\w\i\x\c\6\l\z\c\n\w\w\g\m\i\5\p\l\3\j\e\w\z\8\k\v\7\0\f\9\z\9\o\o\9\j\g\h\z\b\9\0\3 ]] 00:07:49.874 00:07:49.874 real 0m0.427s 00:07:49.874 user 0m0.204s 00:07:49.874 sys 0m0.094s 00:07:49.874 11:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.874 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.874 ************************************ 00:07:49.874 END TEST dd_flag_append 00:07:49.874 ************************************ 00:07:49.874 11:04:00 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:49.874 11:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.874 11:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.874 11:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.874 ************************************ 00:07:49.874 START TEST dd_flag_directory 00:07:49.874 ************************************ 00:07:49.874 11:04:00 -- common/autotest_common.sh@1114 -- # directory 00:07:49.874 11:04:00 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.874 11:04:00 -- common/autotest_common.sh@650 -- # local es=0 00:07:49.874 11:04:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.874 11:04:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.874 11:04:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.874 11:04:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.874 11:04:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.874 11:04:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.874 11:04:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.874 11:04:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.874 11:04:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.874 11:04:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.874 [2024-12-06 11:04:00.906776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
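The dd_flag_append run that just completed seeds the two dump files with the 32-byte strings shown above and then uses --oflag=append so the second copy lands after the existing contents instead of replacing them; the final [[ ... ]] check confirms file1 ends up as dump1 immediately followed by dump0. Roughly, under the same variable names (the shell redirections are inferred, as xtrace does not print them):

printf '%s' "$dump0" > "$test_file0"
printf '%s' "$dump1" > "$test_file1"
# append the contents of file0 onto the end of file1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$test_file0" --of="$test_file1" --oflag=append
[[ $(< "$test_file1") == "${dump1}${dump0}" ]]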
00:07:49.874 [2024-12-06 11:04:00.906863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70142 ] 00:07:50.134 [2024-12-06 11:04:01.035544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.134 [2024-12-06 11:04:01.072083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.134 [2024-12-06 11:04:01.113283] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.134 [2024-12-06 11:04:01.113335] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.134 [2024-12-06 11:04:01.113363] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.134 [2024-12-06 11:04:01.169057] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:50.134 11:04:01 -- common/autotest_common.sh@653 -- # es=236 00:07:50.134 11:04:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.134 11:04:01 -- common/autotest_common.sh@662 -- # es=108 00:07:50.134 11:04:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.134 11:04:01 -- common/autotest_common.sh@670 -- # es=1 00:07:50.134 11:04:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.134 11:04:01 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.134 11:04:01 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.134 11:04:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.134 11:04:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.134 11:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.134 11:04:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.134 11:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.134 11:04:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.134 11:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.134 11:04:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.134 11:04:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.134 11:04:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.393 [2024-12-06 11:04:01.279411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:50.393 [2024-12-06 11:04:01.279788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70146 ] 00:07:50.393 [2024-12-06 11:04:01.416288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.393 [2024-12-06 11:04:01.446289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.393 [2024-12-06 11:04:01.488090] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.393 [2024-12-06 11:04:01.488407] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.393 [2024-12-06 11:04:01.488445] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.653 [2024-12-06 11:04:01.547746] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:50.653 11:04:01 -- common/autotest_common.sh@653 -- # es=236 00:07:50.653 11:04:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.653 11:04:01 -- common/autotest_common.sh@662 -- # es=108 00:07:50.653 11:04:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.653 11:04:01 -- common/autotest_common.sh@670 -- # es=1 00:07:50.653 11:04:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.653 00:07:50.653 real 0m0.746s 00:07:50.653 user 0m0.362s 00:07:50.653 sys 0m0.177s 00:07:50.653 11:04:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.653 11:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.653 ************************************ 00:07:50.653 END TEST dd_flag_directory 00:07:50.653 ************************************ 00:07:50.653 11:04:01 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:50.653 11:04:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.653 11:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.653 11:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.653 ************************************ 00:07:50.653 START TEST dd_flag_nofollow 00:07:50.653 ************************************ 00:07:50.653 11:04:01 -- common/autotest_common.sh@1114 -- # nofollow 00:07:50.653 11:04:01 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:50.653 11:04:01 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:50.653 11:04:01 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:50.653 11:04:01 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:50.653 11:04:01 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.653 11:04:01 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.653 11:04:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.653 11:04:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.653 11:04:01 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.653 11:04:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.653 11:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.653 11:04:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.653 11:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.653 11:04:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.653 11:04:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.653 11:04:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.653 [2024-12-06 11:04:01.721814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.653 [2024-12-06 11:04:01.721905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70175 ] 00:07:50.939 [2024-12-06 11:04:01.859890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.939 [2024-12-06 11:04:01.890615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.939 [2024-12-06 11:04:01.932078] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:50.939 [2024-12-06 11:04:01.932149] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:50.939 [2024-12-06 11:04:01.932181] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.939 [2024-12-06 11:04:01.987847] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:50.939 11:04:02 -- common/autotest_common.sh@653 -- # es=216 00:07:50.939 11:04:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.939 11:04:02 -- common/autotest_common.sh@662 -- # es=88 00:07:50.939 11:04:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.939 11:04:02 -- common/autotest_common.sh@670 -- # es=1 00:07:50.939 11:04:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.939 11:04:02 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:50.939 11:04:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.939 11:04:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:50.939 11:04:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.939 11:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.939 11:04:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.939 11:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.939 11:04:02 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.939 11:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.939 11:04:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.939 11:04:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.939 11:04:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.222 [2024-12-06 11:04:02.105030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.222 [2024-12-06 11:04:02.105133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70184 ] 00:07:51.222 [2024-12-06 11:04:02.243856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.222 [2024-12-06 11:04:02.275288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.222 [2024-12-06 11:04:02.316340] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:51.222 [2024-12-06 11:04:02.316718] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:51.222 [2024-12-06 11:04:02.316738] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.482 [2024-12-06 11:04:02.373883] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:51.482 11:04:02 -- common/autotest_common.sh@653 -- # es=216 00:07:51.482 11:04:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.482 11:04:02 -- common/autotest_common.sh@662 -- # es=88 00:07:51.482 11:04:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.482 11:04:02 -- common/autotest_common.sh@670 -- # es=1 00:07:51.482 11:04:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.482 11:04:02 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:51.482 11:04:02 -- dd/common.sh@98 -- # xtrace_disable 00:07:51.482 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.482 11:04:02 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.482 [2024-12-06 11:04:02.491960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.482 [2024-12-06 11:04:02.492228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70186 ] 00:07:51.741 [2024-12-06 11:04:02.628932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.741 [2024-12-06 11:04:02.661026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.741  [2024-12-06T11:04:02.888Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.741 00:07:51.741 11:04:02 -- dd/posix.sh@49 -- # [[ qqd52an9bhfqu2m4hfiuyk6cpmfzi5aruyfeusjq88fn6apjlvf5qzsbrvjr0o1eldgqqamtbnbey39imqhfiqm0nf49xcnzq0obnxry3xnj2efbe3ghswtkdxqblgq7xa1yx4z2ck6anmsifnixn3uoeuvra965xf575kspt49eeu71j3e78cqc0kol0yf5v84calxp530asrvcd85vq4np0bb2pdh4m8ap79ic9m6klg80w3fh4edmf0sfqcx2xt0l7o06v3sp4ajnn90scix62xqtb9ddpp0thmpxdky02aqe7jm9b89wvcadsh55al7umhes5urivufm7iblotnpotrkl70tv0q3yxwdrpp0m0wgapbaqewz1j2jqa9p24tq62uy24d0ymcmh0qwjkspnwz64hz5aa4thohn4waxr09jwg6i2sr3dyoost0gc45k0ikf4tfaqwpridbxu57wkim6zsqwom69v24cq907af7gcximg5v14h3q4l2q == \q\q\d\5\2\a\n\9\b\h\f\q\u\2\m\4\h\f\i\u\y\k\6\c\p\m\f\z\i\5\a\r\u\y\f\e\u\s\j\q\8\8\f\n\6\a\p\j\l\v\f\5\q\z\s\b\r\v\j\r\0\o\1\e\l\d\g\q\q\a\m\t\b\n\b\e\y\3\9\i\m\q\h\f\i\q\m\0\n\f\4\9\x\c\n\z\q\0\o\b\n\x\r\y\3\x\n\j\2\e\f\b\e\3\g\h\s\w\t\k\d\x\q\b\l\g\q\7\x\a\1\y\x\4\z\2\c\k\6\a\n\m\s\i\f\n\i\x\n\3\u\o\e\u\v\r\a\9\6\5\x\f\5\7\5\k\s\p\t\4\9\e\e\u\7\1\j\3\e\7\8\c\q\c\0\k\o\l\0\y\f\5\v\8\4\c\a\l\x\p\5\3\0\a\s\r\v\c\d\8\5\v\q\4\n\p\0\b\b\2\p\d\h\4\m\8\a\p\7\9\i\c\9\m\6\k\l\g\8\0\w\3\f\h\4\e\d\m\f\0\s\f\q\c\x\2\x\t\0\l\7\o\0\6\v\3\s\p\4\a\j\n\n\9\0\s\c\i\x\6\2\x\q\t\b\9\d\d\p\p\0\t\h\m\p\x\d\k\y\0\2\a\q\e\7\j\m\9\b\8\9\w\v\c\a\d\s\h\5\5\a\l\7\u\m\h\e\s\5\u\r\i\v\u\f\m\7\i\b\l\o\t\n\p\o\t\r\k\l\7\0\t\v\0\q\3\y\x\w\d\r\p\p\0\m\0\w\g\a\p\b\a\q\e\w\z\1\j\2\j\q\a\9\p\2\4\t\q\6\2\u\y\2\4\d\0\y\m\c\m\h\0\q\w\j\k\s\p\n\w\z\6\4\h\z\5\a\a\4\t\h\o\h\n\4\w\a\x\r\0\9\j\w\g\6\i\2\s\r\3\d\y\o\o\s\t\0\g\c\4\5\k\0\i\k\f\4\t\f\a\q\w\p\r\i\d\b\x\u\5\7\w\k\i\m\6\z\s\q\w\o\m\6\9\v\2\4\c\q\9\0\7\a\f\7\g\c\x\i\m\g\5\v\1\4\h\3\q\4\l\2\q ]] 00:07:51.741 00:07:51.741 real 0m1.193s 00:07:51.741 user 0m0.590s 00:07:51.741 sys 0m0.275s 00:07:51.741 11:04:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.741 ************************************ 00:07:51.741 END TEST dd_flag_nofollow 00:07:51.741 ************************************ 00:07:51.741 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:52.001 11:04:02 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:52.001 11:04:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:52.001 11:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.001 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:52.001 ************************************ 00:07:52.001 START TEST dd_flag_noatime 00:07:52.001 ************************************ 00:07:52.001 11:04:02 -- common/autotest_common.sh@1114 -- # noatime 00:07:52.001 11:04:02 -- dd/posix.sh@53 -- # local atime_if 00:07:52.001 11:04:02 -- dd/posix.sh@54 -- # local atime_of 00:07:52.001 11:04:02 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:52.001 11:04:02 -- dd/common.sh@98 -- # xtrace_disable 00:07:52.001 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:52.001 11:04:02 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.001 11:04:02 -- dd/posix.sh@60 -- # atime_if=1733483042 
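dd_flag_nofollow, which completes above, links dd.dump0.link and dd.dump1.link to the dump files with ln -fs and expects --iflag=nofollow / --oflag=nofollow to make spdk_dd refuse to traverse them ("Too many levels of symbolic links"), while the closing run without the flag copies the 512 bytes through the link successfully. Roughly, with paths shortened:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && exit 1   # must fail: ELOOP
    spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow && exit 1   # must fail: ELOOP
    spdk_dd --if=dd.dump0.link --of=dd.dump1                              # plain copy through the link succeeds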
00:07:52.001 11:04:02 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.001 11:04:02 -- dd/posix.sh@61 -- # atime_of=1733483042 00:07:52.001 11:04:02 -- dd/posix.sh@66 -- # sleep 1 00:07:52.940 11:04:03 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.940 [2024-12-06 11:04:03.983302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.940 [2024-12-06 11:04:03.983397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70232 ] 00:07:53.199 [2024-12-06 11:04:04.121700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.199 [2024-12-06 11:04:04.161569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.199  [2024-12-06T11:04:04.606Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.459 00:07:53.459 11:04:04 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.459 11:04:04 -- dd/posix.sh@69 -- # (( atime_if == 1733483042 )) 00:07:53.459 11:04:04 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.459 11:04:04 -- dd/posix.sh@70 -- # (( atime_of == 1733483042 )) 00:07:53.459 11:04:04 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.459 [2024-12-06 11:04:04.441640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:53.459 [2024-12-06 11:04:04.441734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:07:53.459 [2024-12-06 11:04:04.582632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.718 [2024-12-06 11:04:04.622272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.718  [2024-12-06T11:04:04.865Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.718 00:07:53.718 11:04:04 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.718 11:04:04 -- dd/posix.sh@73 -- # (( atime_if < 1733483044 )) 00:07:53.718 00:07:53.718 real 0m1.944s 00:07:53.718 user 0m0.469s 00:07:53.718 sys 0m0.215s 00:07:53.718 11:04:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.718 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 ************************************ 00:07:53.718 END TEST dd_flag_noatime 00:07:53.718 ************************************ 00:07:53.977 11:04:04 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:53.977 11:04:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.977 11:04:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.977 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.977 ************************************ 00:07:53.977 START TEST dd_flags_misc 00:07:53.977 ************************************ 00:07:53.977 11:04:04 -- common/autotest_common.sh@1114 -- # io 00:07:53.977 11:04:04 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:53.977 11:04:04 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:53.977 11:04:04 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:53.977 11:04:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:53.977 11:04:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:53.977 11:04:04 -- dd/common.sh@98 -- # xtrace_disable 00:07:53.977 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.977 11:04:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.977 11:04:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:53.977 [2024-12-06 11:04:04.969680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
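dd_flag_noatime, which finishes above, snapshots the source file's access time with stat --printf=%X, copies it with --iflag=noatime, and asserts the atime did not move; a second copy without the flag must then advance it. In outline (the log also records dd.dump1's atime, omitted here):

    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime copy: atime unchanged (1733483042 above)
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))    # plain copy: atime advances (1733483044 above)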
00:07:53.977 [2024-12-06 11:04:04.969766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70265 ] 00:07:53.977 [2024-12-06 11:04:05.103161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.235 [2024-12-06 11:04:05.137909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.235  [2024-12-06T11:04:05.382Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.235 00:07:54.235 11:04:05 -- dd/posix.sh@93 -- # [[ cfkwhps57bd9a3czg5zikwgbxgjtxpv3i24rqtdythz80rppcw8o8vk0a0j578ooz6cnab6rudordhaqiepvmrkbxfl91prs7jm7rz73ly0kokjuvqtq6quje8s52cbxw2zd61cxj4v9s17br1c1d2y5ddfr56h2u4enuiqi5vqpw5iunkva7jxnwocm3a1b3ulhp5ms80d7xtf3dtsmcmg7fxgs3k4o7jyxfo7bjg5hfe7oyldeva6zxmjmgsd3xw2mj1m8mtaxmjnh4mlgq3v70jw02taccjyejufpc09ufhgtf8fbob62sfaljyp4rlznteozlelhl48aty80bg96zcob5nqf1hk40y56kl0dfe43clhlg41unzo55ga1y8ywmpa7vizocie73eaoeujawlbmotqvyfc72c84otyrcjri41psals4mcbd0jtlr3twfghv0cfpwoai32og2fqdd033zru9or99a77lmyry9kqdkaa3hmqn7eamqb3g == \c\f\k\w\h\p\s\5\7\b\d\9\a\3\c\z\g\5\z\i\k\w\g\b\x\g\j\t\x\p\v\3\i\2\4\r\q\t\d\y\t\h\z\8\0\r\p\p\c\w\8\o\8\v\k\0\a\0\j\5\7\8\o\o\z\6\c\n\a\b\6\r\u\d\o\r\d\h\a\q\i\e\p\v\m\r\k\b\x\f\l\9\1\p\r\s\7\j\m\7\r\z\7\3\l\y\0\k\o\k\j\u\v\q\t\q\6\q\u\j\e\8\s\5\2\c\b\x\w\2\z\d\6\1\c\x\j\4\v\9\s\1\7\b\r\1\c\1\d\2\y\5\d\d\f\r\5\6\h\2\u\4\e\n\u\i\q\i\5\v\q\p\w\5\i\u\n\k\v\a\7\j\x\n\w\o\c\m\3\a\1\b\3\u\l\h\p\5\m\s\8\0\d\7\x\t\f\3\d\t\s\m\c\m\g\7\f\x\g\s\3\k\4\o\7\j\y\x\f\o\7\b\j\g\5\h\f\e\7\o\y\l\d\e\v\a\6\z\x\m\j\m\g\s\d\3\x\w\2\m\j\1\m\8\m\t\a\x\m\j\n\h\4\m\l\g\q\3\v\7\0\j\w\0\2\t\a\c\c\j\y\e\j\u\f\p\c\0\9\u\f\h\g\t\f\8\f\b\o\b\6\2\s\f\a\l\j\y\p\4\r\l\z\n\t\e\o\z\l\e\l\h\l\4\8\a\t\y\8\0\b\g\9\6\z\c\o\b\5\n\q\f\1\h\k\4\0\y\5\6\k\l\0\d\f\e\4\3\c\l\h\l\g\4\1\u\n\z\o\5\5\g\a\1\y\8\y\w\m\p\a\7\v\i\z\o\c\i\e\7\3\e\a\o\e\u\j\a\w\l\b\m\o\t\q\v\y\f\c\7\2\c\8\4\o\t\y\r\c\j\r\i\4\1\p\s\a\l\s\4\m\c\b\d\0\j\t\l\r\3\t\w\f\g\h\v\0\c\f\p\w\o\a\i\3\2\o\g\2\f\q\d\d\0\3\3\z\r\u\9\o\r\9\9\a\7\7\l\m\y\r\y\9\k\q\d\k\a\a\3\h\m\q\n\7\e\a\m\q\b\3\g ]] 00:07:54.235 11:04:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.235 11:04:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:54.493 [2024-12-06 11:04:05.388782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:54.493 [2024-12-06 11:04:05.388876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70272 ] 00:07:54.493 [2024-12-06 11:04:05.526989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.493 [2024-12-06 11:04:05.557700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.493  [2024-12-06T11:04:05.898Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.751 00:07:54.751 11:04:05 -- dd/posix.sh@93 -- # [[ cfkwhps57bd9a3czg5zikwgbxgjtxpv3i24rqtdythz80rppcw8o8vk0a0j578ooz6cnab6rudordhaqiepvmrkbxfl91prs7jm7rz73ly0kokjuvqtq6quje8s52cbxw2zd61cxj4v9s17br1c1d2y5ddfr56h2u4enuiqi5vqpw5iunkva7jxnwocm3a1b3ulhp5ms80d7xtf3dtsmcmg7fxgs3k4o7jyxfo7bjg5hfe7oyldeva6zxmjmgsd3xw2mj1m8mtaxmjnh4mlgq3v70jw02taccjyejufpc09ufhgtf8fbob62sfaljyp4rlznteozlelhl48aty80bg96zcob5nqf1hk40y56kl0dfe43clhlg41unzo55ga1y8ywmpa7vizocie73eaoeujawlbmotqvyfc72c84otyrcjri41psals4mcbd0jtlr3twfghv0cfpwoai32og2fqdd033zru9or99a77lmyry9kqdkaa3hmqn7eamqb3g == \c\f\k\w\h\p\s\5\7\b\d\9\a\3\c\z\g\5\z\i\k\w\g\b\x\g\j\t\x\p\v\3\i\2\4\r\q\t\d\y\t\h\z\8\0\r\p\p\c\w\8\o\8\v\k\0\a\0\j\5\7\8\o\o\z\6\c\n\a\b\6\r\u\d\o\r\d\h\a\q\i\e\p\v\m\r\k\b\x\f\l\9\1\p\r\s\7\j\m\7\r\z\7\3\l\y\0\k\o\k\j\u\v\q\t\q\6\q\u\j\e\8\s\5\2\c\b\x\w\2\z\d\6\1\c\x\j\4\v\9\s\1\7\b\r\1\c\1\d\2\y\5\d\d\f\r\5\6\h\2\u\4\e\n\u\i\q\i\5\v\q\p\w\5\i\u\n\k\v\a\7\j\x\n\w\o\c\m\3\a\1\b\3\u\l\h\p\5\m\s\8\0\d\7\x\t\f\3\d\t\s\m\c\m\g\7\f\x\g\s\3\k\4\o\7\j\y\x\f\o\7\b\j\g\5\h\f\e\7\o\y\l\d\e\v\a\6\z\x\m\j\m\g\s\d\3\x\w\2\m\j\1\m\8\m\t\a\x\m\j\n\h\4\m\l\g\q\3\v\7\0\j\w\0\2\t\a\c\c\j\y\e\j\u\f\p\c\0\9\u\f\h\g\t\f\8\f\b\o\b\6\2\s\f\a\l\j\y\p\4\r\l\z\n\t\e\o\z\l\e\l\h\l\4\8\a\t\y\8\0\b\g\9\6\z\c\o\b\5\n\q\f\1\h\k\4\0\y\5\6\k\l\0\d\f\e\4\3\c\l\h\l\g\4\1\u\n\z\o\5\5\g\a\1\y\8\y\w\m\p\a\7\v\i\z\o\c\i\e\7\3\e\a\o\e\u\j\a\w\l\b\m\o\t\q\v\y\f\c\7\2\c\8\4\o\t\y\r\c\j\r\i\4\1\p\s\a\l\s\4\m\c\b\d\0\j\t\l\r\3\t\w\f\g\h\v\0\c\f\p\w\o\a\i\3\2\o\g\2\f\q\d\d\0\3\3\z\r\u\9\o\r\9\9\a\7\7\l\m\y\r\y\9\k\q\d\k\a\a\3\h\m\q\n\7\e\a\m\q\b\3\g ]] 00:07:54.751 11:04:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.751 11:04:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:54.751 [2024-12-06 11:04:05.796356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:54.751 [2024-12-06 11:04:05.796448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70279 ] 00:07:55.009 [2024-12-06 11:04:05.935304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.009 [2024-12-06 11:04:05.967767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.009  [2024-12-06T11:04:06.414Z] Copying: 512/512 [B] (average 250 kBps) 00:07:55.267 00:07:55.267 11:04:06 -- dd/posix.sh@93 -- # [[ cfkwhps57bd9a3czg5zikwgbxgjtxpv3i24rqtdythz80rppcw8o8vk0a0j578ooz6cnab6rudordhaqiepvmrkbxfl91prs7jm7rz73ly0kokjuvqtq6quje8s52cbxw2zd61cxj4v9s17br1c1d2y5ddfr56h2u4enuiqi5vqpw5iunkva7jxnwocm3a1b3ulhp5ms80d7xtf3dtsmcmg7fxgs3k4o7jyxfo7bjg5hfe7oyldeva6zxmjmgsd3xw2mj1m8mtaxmjnh4mlgq3v70jw02taccjyejufpc09ufhgtf8fbob62sfaljyp4rlznteozlelhl48aty80bg96zcob5nqf1hk40y56kl0dfe43clhlg41unzo55ga1y8ywmpa7vizocie73eaoeujawlbmotqvyfc72c84otyrcjri41psals4mcbd0jtlr3twfghv0cfpwoai32og2fqdd033zru9or99a77lmyry9kqdkaa3hmqn7eamqb3g == \c\f\k\w\h\p\s\5\7\b\d\9\a\3\c\z\g\5\z\i\k\w\g\b\x\g\j\t\x\p\v\3\i\2\4\r\q\t\d\y\t\h\z\8\0\r\p\p\c\w\8\o\8\v\k\0\a\0\j\5\7\8\o\o\z\6\c\n\a\b\6\r\u\d\o\r\d\h\a\q\i\e\p\v\m\r\k\b\x\f\l\9\1\p\r\s\7\j\m\7\r\z\7\3\l\y\0\k\o\k\j\u\v\q\t\q\6\q\u\j\e\8\s\5\2\c\b\x\w\2\z\d\6\1\c\x\j\4\v\9\s\1\7\b\r\1\c\1\d\2\y\5\d\d\f\r\5\6\h\2\u\4\e\n\u\i\q\i\5\v\q\p\w\5\i\u\n\k\v\a\7\j\x\n\w\o\c\m\3\a\1\b\3\u\l\h\p\5\m\s\8\0\d\7\x\t\f\3\d\t\s\m\c\m\g\7\f\x\g\s\3\k\4\o\7\j\y\x\f\o\7\b\j\g\5\h\f\e\7\o\y\l\d\e\v\a\6\z\x\m\j\m\g\s\d\3\x\w\2\m\j\1\m\8\m\t\a\x\m\j\n\h\4\m\l\g\q\3\v\7\0\j\w\0\2\t\a\c\c\j\y\e\j\u\f\p\c\0\9\u\f\h\g\t\f\8\f\b\o\b\6\2\s\f\a\l\j\y\p\4\r\l\z\n\t\e\o\z\l\e\l\h\l\4\8\a\t\y\8\0\b\g\9\6\z\c\o\b\5\n\q\f\1\h\k\4\0\y\5\6\k\l\0\d\f\e\4\3\c\l\h\l\g\4\1\u\n\z\o\5\5\g\a\1\y\8\y\w\m\p\a\7\v\i\z\o\c\i\e\7\3\e\a\o\e\u\j\a\w\l\b\m\o\t\q\v\y\f\c\7\2\c\8\4\o\t\y\r\c\j\r\i\4\1\p\s\a\l\s\4\m\c\b\d\0\j\t\l\r\3\t\w\f\g\h\v\0\c\f\p\w\o\a\i\3\2\o\g\2\f\q\d\d\0\3\3\z\r\u\9\o\r\9\9\a\7\7\l\m\y\r\y\9\k\q\d\k\a\a\3\h\m\q\n\7\e\a\m\q\b\3\g ]] 00:07:55.267 11:04:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.267 11:04:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.267 [2024-12-06 11:04:06.222361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:55.267 [2024-12-06 11:04:06.222465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70287 ] 00:07:55.267 [2024-12-06 11:04:06.359568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.267 [2024-12-06 11:04:06.390101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.526  [2024-12-06T11:04:06.673Z] Copying: 512/512 [B] (average 250 kBps) 00:07:55.526 00:07:55.526 11:04:06 -- dd/posix.sh@93 -- # [[ cfkwhps57bd9a3czg5zikwgbxgjtxpv3i24rqtdythz80rppcw8o8vk0a0j578ooz6cnab6rudordhaqiepvmrkbxfl91prs7jm7rz73ly0kokjuvqtq6quje8s52cbxw2zd61cxj4v9s17br1c1d2y5ddfr56h2u4enuiqi5vqpw5iunkva7jxnwocm3a1b3ulhp5ms80d7xtf3dtsmcmg7fxgs3k4o7jyxfo7bjg5hfe7oyldeva6zxmjmgsd3xw2mj1m8mtaxmjnh4mlgq3v70jw02taccjyejufpc09ufhgtf8fbob62sfaljyp4rlznteozlelhl48aty80bg96zcob5nqf1hk40y56kl0dfe43clhlg41unzo55ga1y8ywmpa7vizocie73eaoeujawlbmotqvyfc72c84otyrcjri41psals4mcbd0jtlr3twfghv0cfpwoai32og2fqdd033zru9or99a77lmyry9kqdkaa3hmqn7eamqb3g == \c\f\k\w\h\p\s\5\7\b\d\9\a\3\c\z\g\5\z\i\k\w\g\b\x\g\j\t\x\p\v\3\i\2\4\r\q\t\d\y\t\h\z\8\0\r\p\p\c\w\8\o\8\v\k\0\a\0\j\5\7\8\o\o\z\6\c\n\a\b\6\r\u\d\o\r\d\h\a\q\i\e\p\v\m\r\k\b\x\f\l\9\1\p\r\s\7\j\m\7\r\z\7\3\l\y\0\k\o\k\j\u\v\q\t\q\6\q\u\j\e\8\s\5\2\c\b\x\w\2\z\d\6\1\c\x\j\4\v\9\s\1\7\b\r\1\c\1\d\2\y\5\d\d\f\r\5\6\h\2\u\4\e\n\u\i\q\i\5\v\q\p\w\5\i\u\n\k\v\a\7\j\x\n\w\o\c\m\3\a\1\b\3\u\l\h\p\5\m\s\8\0\d\7\x\t\f\3\d\t\s\m\c\m\g\7\f\x\g\s\3\k\4\o\7\j\y\x\f\o\7\b\j\g\5\h\f\e\7\o\y\l\d\e\v\a\6\z\x\m\j\m\g\s\d\3\x\w\2\m\j\1\m\8\m\t\a\x\m\j\n\h\4\m\l\g\q\3\v\7\0\j\w\0\2\t\a\c\c\j\y\e\j\u\f\p\c\0\9\u\f\h\g\t\f\8\f\b\o\b\6\2\s\f\a\l\j\y\p\4\r\l\z\n\t\e\o\z\l\e\l\h\l\4\8\a\t\y\8\0\b\g\9\6\z\c\o\b\5\n\q\f\1\h\k\4\0\y\5\6\k\l\0\d\f\e\4\3\c\l\h\l\g\4\1\u\n\z\o\5\5\g\a\1\y\8\y\w\m\p\a\7\v\i\z\o\c\i\e\7\3\e\a\o\e\u\j\a\w\l\b\m\o\t\q\v\y\f\c\7\2\c\8\4\o\t\y\r\c\j\r\i\4\1\p\s\a\l\s\4\m\c\b\d\0\j\t\l\r\3\t\w\f\g\h\v\0\c\f\p\w\o\a\i\3\2\o\g\2\f\q\d\d\0\3\3\z\r\u\9\o\r\9\9\a\7\7\l\m\y\r\y\9\k\q\d\k\a\a\3\h\m\q\n\7\e\a\m\q\b\3\g ]] 00:07:55.526 11:04:06 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:55.526 11:04:06 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:55.526 11:04:06 -- dd/common.sh@98 -- # xtrace_disable 00:07:55.526 11:04:06 -- common/autotest_common.sh@10 -- # set +x 00:07:55.526 11:04:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.526 11:04:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:55.526 [2024-12-06 11:04:06.642020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:55.526 [2024-12-06 11:04:06.642122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70289 ] 00:07:55.786 [2024-12-06 11:04:06.776563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.786 [2024-12-06 11:04:06.809220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.786  [2024-12-06T11:04:07.192Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.045 00:07:56.045 11:04:06 -- dd/posix.sh@93 -- # [[ yz5fqnxhq7xq45p3ct0jhr3phrmg0djceyk0843ol3dfcki4lxtlw32l2v5vi3r1stikitvqwmwim7dz7r10unqldtzk03lhqoypfr0yshzrp00irf4ev39sgi2p029h3un5pcslv2duumg0scwljfnr709tg7wbi6x6z98v36akqaytzxtajuwxktw41aiagzd7xikn127vt3nr6vlgnzlh9by9e4gyg20b1m1gndjuetnjpsgs9xv7be4iw4p45waxml470rwzn3lkl3990qbkhmff8oruuw6tu8369ganrdu7hce4av1yb30093abru2o57o92befdh7wth6861jn9jujtox0hql8ql6pud7ps285hnmzsblln08zolh6whxlnk9zcdg4b5a49njxf87b29wn5i583y0zawgd38vuydmmfxqvpwb81bsrbklrwo00y6noduasg8hufvvg2aazt0qr5bt3t0l976xgzknpqncim9kopdppmk9yq6p2 == \y\z\5\f\q\n\x\h\q\7\x\q\4\5\p\3\c\t\0\j\h\r\3\p\h\r\m\g\0\d\j\c\e\y\k\0\8\4\3\o\l\3\d\f\c\k\i\4\l\x\t\l\w\3\2\l\2\v\5\v\i\3\r\1\s\t\i\k\i\t\v\q\w\m\w\i\m\7\d\z\7\r\1\0\u\n\q\l\d\t\z\k\0\3\l\h\q\o\y\p\f\r\0\y\s\h\z\r\p\0\0\i\r\f\4\e\v\3\9\s\g\i\2\p\0\2\9\h\3\u\n\5\p\c\s\l\v\2\d\u\u\m\g\0\s\c\w\l\j\f\n\r\7\0\9\t\g\7\w\b\i\6\x\6\z\9\8\v\3\6\a\k\q\a\y\t\z\x\t\a\j\u\w\x\k\t\w\4\1\a\i\a\g\z\d\7\x\i\k\n\1\2\7\v\t\3\n\r\6\v\l\g\n\z\l\h\9\b\y\9\e\4\g\y\g\2\0\b\1\m\1\g\n\d\j\u\e\t\n\j\p\s\g\s\9\x\v\7\b\e\4\i\w\4\p\4\5\w\a\x\m\l\4\7\0\r\w\z\n\3\l\k\l\3\9\9\0\q\b\k\h\m\f\f\8\o\r\u\u\w\6\t\u\8\3\6\9\g\a\n\r\d\u\7\h\c\e\4\a\v\1\y\b\3\0\0\9\3\a\b\r\u\2\o\5\7\o\9\2\b\e\f\d\h\7\w\t\h\6\8\6\1\j\n\9\j\u\j\t\o\x\0\h\q\l\8\q\l\6\p\u\d\7\p\s\2\8\5\h\n\m\z\s\b\l\l\n\0\8\z\o\l\h\6\w\h\x\l\n\k\9\z\c\d\g\4\b\5\a\4\9\n\j\x\f\8\7\b\2\9\w\n\5\i\5\8\3\y\0\z\a\w\g\d\3\8\v\u\y\d\m\m\f\x\q\v\p\w\b\8\1\b\s\r\b\k\l\r\w\o\0\0\y\6\n\o\d\u\a\s\g\8\h\u\f\v\v\g\2\a\a\z\t\0\q\r\5\b\t\3\t\0\l\9\7\6\x\g\z\k\n\p\q\n\c\i\m\9\k\o\p\d\p\p\m\k\9\y\q\6\p\2 ]] 00:07:56.045 11:04:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.045 11:04:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:56.045 [2024-12-06 11:04:07.044507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:56.045 [2024-12-06 11:04:07.045104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70297 ] 00:07:56.045 [2024-12-06 11:04:07.184257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.305 [2024-12-06 11:04:07.220383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.305  [2024-12-06T11:04:07.452Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.305 00:07:56.305 11:04:07 -- dd/posix.sh@93 -- # [[ yz5fqnxhq7xq45p3ct0jhr3phrmg0djceyk0843ol3dfcki4lxtlw32l2v5vi3r1stikitvqwmwim7dz7r10unqldtzk03lhqoypfr0yshzrp00irf4ev39sgi2p029h3un5pcslv2duumg0scwljfnr709tg7wbi6x6z98v36akqaytzxtajuwxktw41aiagzd7xikn127vt3nr6vlgnzlh9by9e4gyg20b1m1gndjuetnjpsgs9xv7be4iw4p45waxml470rwzn3lkl3990qbkhmff8oruuw6tu8369ganrdu7hce4av1yb30093abru2o57o92befdh7wth6861jn9jujtox0hql8ql6pud7ps285hnmzsblln08zolh6whxlnk9zcdg4b5a49njxf87b29wn5i583y0zawgd38vuydmmfxqvpwb81bsrbklrwo00y6noduasg8hufvvg2aazt0qr5bt3t0l976xgzknpqncim9kopdppmk9yq6p2 == \y\z\5\f\q\n\x\h\q\7\x\q\4\5\p\3\c\t\0\j\h\r\3\p\h\r\m\g\0\d\j\c\e\y\k\0\8\4\3\o\l\3\d\f\c\k\i\4\l\x\t\l\w\3\2\l\2\v\5\v\i\3\r\1\s\t\i\k\i\t\v\q\w\m\w\i\m\7\d\z\7\r\1\0\u\n\q\l\d\t\z\k\0\3\l\h\q\o\y\p\f\r\0\y\s\h\z\r\p\0\0\i\r\f\4\e\v\3\9\s\g\i\2\p\0\2\9\h\3\u\n\5\p\c\s\l\v\2\d\u\u\m\g\0\s\c\w\l\j\f\n\r\7\0\9\t\g\7\w\b\i\6\x\6\z\9\8\v\3\6\a\k\q\a\y\t\z\x\t\a\j\u\w\x\k\t\w\4\1\a\i\a\g\z\d\7\x\i\k\n\1\2\7\v\t\3\n\r\6\v\l\g\n\z\l\h\9\b\y\9\e\4\g\y\g\2\0\b\1\m\1\g\n\d\j\u\e\t\n\j\p\s\g\s\9\x\v\7\b\e\4\i\w\4\p\4\5\w\a\x\m\l\4\7\0\r\w\z\n\3\l\k\l\3\9\9\0\q\b\k\h\m\f\f\8\o\r\u\u\w\6\t\u\8\3\6\9\g\a\n\r\d\u\7\h\c\e\4\a\v\1\y\b\3\0\0\9\3\a\b\r\u\2\o\5\7\o\9\2\b\e\f\d\h\7\w\t\h\6\8\6\1\j\n\9\j\u\j\t\o\x\0\h\q\l\8\q\l\6\p\u\d\7\p\s\2\8\5\h\n\m\z\s\b\l\l\n\0\8\z\o\l\h\6\w\h\x\l\n\k\9\z\c\d\g\4\b\5\a\4\9\n\j\x\f\8\7\b\2\9\w\n\5\i\5\8\3\y\0\z\a\w\g\d\3\8\v\u\y\d\m\m\f\x\q\v\p\w\b\8\1\b\s\r\b\k\l\r\w\o\0\0\y\6\n\o\d\u\a\s\g\8\h\u\f\v\v\g\2\a\a\z\t\0\q\r\5\b\t\3\t\0\l\9\7\6\x\g\z\k\n\p\q\n\c\i\m\9\k\o\p\d\p\p\m\k\9\y\q\6\p\2 ]] 00:07:56.305 11:04:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.305 11:04:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:56.565 [2024-12-06 11:04:07.456070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:56.565 [2024-12-06 11:04:07.456179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70304 ] 00:07:56.565 [2024-12-06 11:04:07.592030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.565 [2024-12-06 11:04:07.622027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.565  [2024-12-06T11:04:07.972Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.825 00:07:56.825 11:04:07 -- dd/posix.sh@93 -- # [[ yz5fqnxhq7xq45p3ct0jhr3phrmg0djceyk0843ol3dfcki4lxtlw32l2v5vi3r1stikitvqwmwim7dz7r10unqldtzk03lhqoypfr0yshzrp00irf4ev39sgi2p029h3un5pcslv2duumg0scwljfnr709tg7wbi6x6z98v36akqaytzxtajuwxktw41aiagzd7xikn127vt3nr6vlgnzlh9by9e4gyg20b1m1gndjuetnjpsgs9xv7be4iw4p45waxml470rwzn3lkl3990qbkhmff8oruuw6tu8369ganrdu7hce4av1yb30093abru2o57o92befdh7wth6861jn9jujtox0hql8ql6pud7ps285hnmzsblln08zolh6whxlnk9zcdg4b5a49njxf87b29wn5i583y0zawgd38vuydmmfxqvpwb81bsrbklrwo00y6noduasg8hufvvg2aazt0qr5bt3t0l976xgzknpqncim9kopdppmk9yq6p2 == \y\z\5\f\q\n\x\h\q\7\x\q\4\5\p\3\c\t\0\j\h\r\3\p\h\r\m\g\0\d\j\c\e\y\k\0\8\4\3\o\l\3\d\f\c\k\i\4\l\x\t\l\w\3\2\l\2\v\5\v\i\3\r\1\s\t\i\k\i\t\v\q\w\m\w\i\m\7\d\z\7\r\1\0\u\n\q\l\d\t\z\k\0\3\l\h\q\o\y\p\f\r\0\y\s\h\z\r\p\0\0\i\r\f\4\e\v\3\9\s\g\i\2\p\0\2\9\h\3\u\n\5\p\c\s\l\v\2\d\u\u\m\g\0\s\c\w\l\j\f\n\r\7\0\9\t\g\7\w\b\i\6\x\6\z\9\8\v\3\6\a\k\q\a\y\t\z\x\t\a\j\u\w\x\k\t\w\4\1\a\i\a\g\z\d\7\x\i\k\n\1\2\7\v\t\3\n\r\6\v\l\g\n\z\l\h\9\b\y\9\e\4\g\y\g\2\0\b\1\m\1\g\n\d\j\u\e\t\n\j\p\s\g\s\9\x\v\7\b\e\4\i\w\4\p\4\5\w\a\x\m\l\4\7\0\r\w\z\n\3\l\k\l\3\9\9\0\q\b\k\h\m\f\f\8\o\r\u\u\w\6\t\u\8\3\6\9\g\a\n\r\d\u\7\h\c\e\4\a\v\1\y\b\3\0\0\9\3\a\b\r\u\2\o\5\7\o\9\2\b\e\f\d\h\7\w\t\h\6\8\6\1\j\n\9\j\u\j\t\o\x\0\h\q\l\8\q\l\6\p\u\d\7\p\s\2\8\5\h\n\m\z\s\b\l\l\n\0\8\z\o\l\h\6\w\h\x\l\n\k\9\z\c\d\g\4\b\5\a\4\9\n\j\x\f\8\7\b\2\9\w\n\5\i\5\8\3\y\0\z\a\w\g\d\3\8\v\u\y\d\m\m\f\x\q\v\p\w\b\8\1\b\s\r\b\k\l\r\w\o\0\0\y\6\n\o\d\u\a\s\g\8\h\u\f\v\v\g\2\a\a\z\t\0\q\r\5\b\t\3\t\0\l\9\7\6\x\g\z\k\n\p\q\n\c\i\m\9\k\o\p\d\p\p\m\k\9\y\q\6\p\2 ]] 00:07:56.825 11:04:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.825 11:04:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.825 [2024-12-06 11:04:07.849920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:56.825 [2024-12-06 11:04:07.850020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70312 ] 00:07:57.084 [2024-12-06 11:04:07.977346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.084 [2024-12-06 11:04:08.008298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.084  [2024-12-06T11:04:08.231Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.084 00:07:57.084 11:04:08 -- dd/posix.sh@93 -- # [[ yz5fqnxhq7xq45p3ct0jhr3phrmg0djceyk0843ol3dfcki4lxtlw32l2v5vi3r1stikitvqwmwim7dz7r10unqldtzk03lhqoypfr0yshzrp00irf4ev39sgi2p029h3un5pcslv2duumg0scwljfnr709tg7wbi6x6z98v36akqaytzxtajuwxktw41aiagzd7xikn127vt3nr6vlgnzlh9by9e4gyg20b1m1gndjuetnjpsgs9xv7be4iw4p45waxml470rwzn3lkl3990qbkhmff8oruuw6tu8369ganrdu7hce4av1yb30093abru2o57o92befdh7wth6861jn9jujtox0hql8ql6pud7ps285hnmzsblln08zolh6whxlnk9zcdg4b5a49njxf87b29wn5i583y0zawgd38vuydmmfxqvpwb81bsrbklrwo00y6noduasg8hufvvg2aazt0qr5bt3t0l976xgzknpqncim9kopdppmk9yq6p2 == \y\z\5\f\q\n\x\h\q\7\x\q\4\5\p\3\c\t\0\j\h\r\3\p\h\r\m\g\0\d\j\c\e\y\k\0\8\4\3\o\l\3\d\f\c\k\i\4\l\x\t\l\w\3\2\l\2\v\5\v\i\3\r\1\s\t\i\k\i\t\v\q\w\m\w\i\m\7\d\z\7\r\1\0\u\n\q\l\d\t\z\k\0\3\l\h\q\o\y\p\f\r\0\y\s\h\z\r\p\0\0\i\r\f\4\e\v\3\9\s\g\i\2\p\0\2\9\h\3\u\n\5\p\c\s\l\v\2\d\u\u\m\g\0\s\c\w\l\j\f\n\r\7\0\9\t\g\7\w\b\i\6\x\6\z\9\8\v\3\6\a\k\q\a\y\t\z\x\t\a\j\u\w\x\k\t\w\4\1\a\i\a\g\z\d\7\x\i\k\n\1\2\7\v\t\3\n\r\6\v\l\g\n\z\l\h\9\b\y\9\e\4\g\y\g\2\0\b\1\m\1\g\n\d\j\u\e\t\n\j\p\s\g\s\9\x\v\7\b\e\4\i\w\4\p\4\5\w\a\x\m\l\4\7\0\r\w\z\n\3\l\k\l\3\9\9\0\q\b\k\h\m\f\f\8\o\r\u\u\w\6\t\u\8\3\6\9\g\a\n\r\d\u\7\h\c\e\4\a\v\1\y\b\3\0\0\9\3\a\b\r\u\2\o\5\7\o\9\2\b\e\f\d\h\7\w\t\h\6\8\6\1\j\n\9\j\u\j\t\o\x\0\h\q\l\8\q\l\6\p\u\d\7\p\s\2\8\5\h\n\m\z\s\b\l\l\n\0\8\z\o\l\h\6\w\h\x\l\n\k\9\z\c\d\g\4\b\5\a\4\9\n\j\x\f\8\7\b\2\9\w\n\5\i\5\8\3\y\0\z\a\w\g\d\3\8\v\u\y\d\m\m\f\x\q\v\p\w\b\8\1\b\s\r\b\k\l\r\w\o\0\0\y\6\n\o\d\u\a\s\g\8\h\u\f\v\v\g\2\a\a\z\t\0\q\r\5\b\t\3\t\0\l\9\7\6\x\g\z\k\n\p\q\n\c\i\m\9\k\o\p\d\p\p\m\k\9\y\q\6\p\2 ]] 00:07:57.084 00:07:57.084 real 0m3.309s 00:07:57.084 user 0m1.594s 00:07:57.084 sys 0m0.730s 00:07:57.084 ************************************ 00:07:57.084 END TEST dd_flags_misc 00:07:57.084 ************************************ 00:07:57.084 11:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.084 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.344 11:04:08 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:57.344 11:04:08 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:57.344 * Second test run, disabling liburing, forcing AIO 00:07:57.344 11:04:08 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:57.344 11:04:08 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:57.344 11:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.344 11:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.344 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.344 ************************************ 00:07:57.344 START TEST dd_flag_append_forced_aio 00:07:57.344 ************************************ 00:07:57.344 11:04:08 -- common/autotest_common.sh@1114 -- # append 00:07:57.344 11:04:08 -- dd/posix.sh@16 -- # local dump0 00:07:57.344 11:04:08 -- dd/posix.sh@17 -- # local dump1 00:07:57.344 11:04:08 -- dd/posix.sh@19 -- # gen_bytes 32 
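The dd_flags_misc block above sweeps the read/write flag matrix: gen_bytes 512 refreshes the source data (the generation itself is hidden behind xtrace_disable), each iflag from (direct nonblock) is paired with each oflag from (direct nonblock sync dsync), and every copy is verified by the long [[ ... == ... ]] content comparisons. Schematically, with the comparison approximated; the remaining *_forced_aio tests repeat the same pattern with --aio added to DD_APP, i.e. liburing disabled in favor of POSIX AIO:

    for iflag in direct nonblock; do
        gen_bytes 512                                      # refresh dd.dump0 with 512 random bytes
        for oflag in direct nonblock sync dsync; do
            spdk_dd --if=dd.dump0 --iflag=$iflag --of=dd.dump1 --oflag=$oflag
            [[ "$(< dd.dump1)" == "$(< dd.dump0)" ]]       # contents must round-trip unchanged
        done
    done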
00:07:57.344 11:04:08 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.344 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.344 11:04:08 -- dd/posix.sh@19 -- # dump0=baa4vvc51ey0wo0yci1j0lpmdj12j7s5 00:07:57.344 11:04:08 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:57.344 11:04:08 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.344 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.344 11:04:08 -- dd/posix.sh@20 -- # dump1=iatgfqbiyn5pjajo7cxygxbu3ck2i97g 00:07:57.344 11:04:08 -- dd/posix.sh@22 -- # printf %s baa4vvc51ey0wo0yci1j0lpmdj12j7s5 00:07:57.344 11:04:08 -- dd/posix.sh@23 -- # printf %s iatgfqbiyn5pjajo7cxygxbu3ck2i97g 00:07:57.344 11:04:08 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:57.344 [2024-12-06 11:04:08.327263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.344 [2024-12-06 11:04:08.327353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70338 ] 00:07:57.344 [2024-12-06 11:04:08.464854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.604 [2024-12-06 11:04:08.496850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.604  [2024-12-06T11:04:08.751Z] Copying: 32/32 [B] (average 31 kBps) 00:07:57.604 00:07:57.604 11:04:08 -- dd/posix.sh@27 -- # [[ iatgfqbiyn5pjajo7cxygxbu3ck2i97gbaa4vvc51ey0wo0yci1j0lpmdj12j7s5 == \i\a\t\g\f\q\b\i\y\n\5\p\j\a\j\o\7\c\x\y\g\x\b\u\3\c\k\2\i\9\7\g\b\a\a\4\v\v\c\5\1\e\y\0\w\o\0\y\c\i\1\j\0\l\p\m\d\j\1\2\j\7\s\5 ]] 00:07:57.604 00:07:57.604 real 0m0.415s 00:07:57.604 user 0m0.210s 00:07:57.604 sys 0m0.088s 00:07:57.604 ************************************ 00:07:57.604 END TEST dd_flag_append_forced_aio 00:07:57.604 ************************************ 00:07:57.604 11:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.604 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.604 11:04:08 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:57.604 11:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.604 11:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.604 11:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:57.604 ************************************ 00:07:57.604 START TEST dd_flag_directory_forced_aio 00:07:57.604 ************************************ 00:07:57.604 11:04:08 -- common/autotest_common.sh@1114 -- # directory 00:07:57.604 11:04:08 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.604 11:04:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:57.604 11:04:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.604 11:04:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.604 11:04:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.604 11:04:08 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.604 11:04:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.604 11:04:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.604 11:04:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.604 11:04:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.604 11:04:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.604 11:04:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.863 [2024-12-06 11:04:08.789784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.863 [2024-12-06 11:04:08.789884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70359 ] 00:07:57.863 [2024-12-06 11:04:08.929528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.863 [2024-12-06 11:04:08.963446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.122 [2024-12-06 11:04:09.009060] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.122 [2024-12-06 11:04:09.009175] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.122 [2024-12-06 11:04:09.009203] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.123 [2024-12-06 11:04:09.068324] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.123 11:04:09 -- common/autotest_common.sh@653 -- # es=236 00:07:58.123 11:04:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.123 11:04:09 -- common/autotest_common.sh@662 -- # es=108 00:07:58.123 11:04:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:58.123 11:04:09 -- common/autotest_common.sh@670 -- # es=1 00:07:58.123 11:04:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.123 11:04:09 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.123 11:04:09 -- common/autotest_common.sh@650 -- # local es=0 00:07:58.123 11:04:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.123 11:04:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.123 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.123 11:04:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.123 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.123 11:04:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.123 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.123 11:04:09 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.123 11:04:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.123 11:04:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.123 [2024-12-06 11:04:09.186053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.123 [2024-12-06 11:04:09.186156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70369 ] 00:07:58.382 [2024-12-06 11:04:09.324027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.382 [2024-12-06 11:04:09.354589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.382 [2024-12-06 11:04:09.395702] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.382 [2024-12-06 11:04:09.395781] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.382 [2024-12-06 11:04:09.395795] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.382 [2024-12-06 11:04:09.455460] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.382 11:04:09 -- common/autotest_common.sh@653 -- # es=236 00:07:58.382 11:04:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.382 11:04:09 -- common/autotest_common.sh@662 -- # es=108 00:07:58.382 11:04:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:58.382 11:04:09 -- common/autotest_common.sh@670 -- # es=1 00:07:58.382 11:04:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.382 00:07:58.382 real 0m0.781s 00:07:58.382 user 0m0.393s 00:07:58.382 sys 0m0.180s 00:07:58.382 11:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.382 11:04:09 -- common/autotest_common.sh@10 -- # set +x 00:07:58.382 ************************************ 00:07:58.382 END TEST dd_flag_directory_forced_aio 00:07:58.382 ************************************ 00:07:58.641 11:04:09 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:58.641 11:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.641 11:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.642 11:04:09 -- common/autotest_common.sh@10 -- # set +x 00:07:58.642 ************************************ 00:07:58.642 START TEST dd_flag_nofollow_forced_aio 00:07:58.642 ************************************ 00:07:58.642 11:04:09 -- common/autotest_common.sh@1114 -- # nofollow 00:07:58.642 11:04:09 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.642 11:04:09 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.642 11:04:09 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.642 11:04:09 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.642 11:04:09 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.642 11:04:09 -- common/autotest_common.sh@650 -- # local es=0 00:07:58.642 11:04:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.642 11:04:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.642 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.642 11:04:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.642 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.642 11:04:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.642 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.642 11:04:09 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.642 11:04:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.642 11:04:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.642 [2024-12-06 11:04:09.636717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.642 [2024-12-06 11:04:09.636816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70397 ] 00:07:58.642 [2024-12-06 11:04:09.770433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.901 [2024-12-06 11:04:09.802024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.901 [2024-12-06 11:04:09.846246] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:58.901 [2024-12-06 11:04:09.846302] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:58.901 [2024-12-06 11:04:09.846314] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.901 [2024-12-06 11:04:09.906637] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.901 11:04:09 -- common/autotest_common.sh@653 -- # es=216 00:07:58.901 11:04:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.901 11:04:09 -- common/autotest_common.sh@662 -- # es=88 00:07:58.901 11:04:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:58.901 11:04:09 -- common/autotest_common.sh@670 -- # es=1 00:07:58.901 11:04:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.901 11:04:09 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:58.901 11:04:09 -- common/autotest_common.sh@650 -- # local es=0 00:07:58.901 11:04:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:58.901 11:04:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.901 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.901 11:04:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.901 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.901 11:04:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.901 11:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.901 11:04:09 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.901 11:04:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.901 11:04:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:58.901 [2024-12-06 11:04:10.027191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.901 [2024-12-06 11:04:10.027300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70407 ] 00:07:59.161 [2024-12-06 11:04:10.170463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.161 [2024-12-06 11:04:10.213230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.161 [2024-12-06 11:04:10.270454] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.161 [2024-12-06 11:04:10.270522] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.161 [2024-12-06 11:04:10.270590] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.420 [2024-12-06 11:04:10.334828] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.420 11:04:10 -- common/autotest_common.sh@653 -- # es=216 00:07:59.420 11:04:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.420 11:04:10 -- common/autotest_common.sh@662 -- # es=88 00:07:59.420 11:04:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.420 11:04:10 -- common/autotest_common.sh@670 -- # es=1 00:07:59.420 11:04:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.420 11:04:10 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:59.420 11:04:10 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.420 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.420 11:04:10 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.420 [2024-12-06 11:04:10.459558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:59.420 [2024-12-06 11:04:10.459664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ] 00:07:59.680 [2024-12-06 11:04:10.595654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.680 [2024-12-06 11:04:10.627417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.680  [2024-12-06T11:04:10.827Z] Copying: 512/512 [B] (average 500 kBps) 00:07:59.680 00:07:59.680 11:04:10 -- dd/posix.sh@49 -- # [[ vcyff5n3t8ofwd0o78gw1gxex540flxtcuxc803zm9wznznnjrn72kvz7fp7rczewnmevf7ocqqp5vj3vm0e4v1k6yxr1984j3kdew3yhyzuv0j0vxcknu7mt31opre6xmvhph4mfcv721yrjnm4e36j04d7m2wtszwaxtb6iamx3q8tff6n93ac2zs5twli6k8mkz4kyildareaotjypbtxou9bsvr71op4a3id45p5u9eksdl5nglvbpphc9tvktg14gq06vxomdu4ymp1q43i9yuzpzcrm0as91q053p5iraza0in59oupdvwzydz49rzxxh4yhkn8w3fz6yk8ieyczcpbp3hk0qief65tolg61w3st88225wsqycxmpliij5n4hf5yyop3j39jonw6jkq978q98bc313ofjsskqbmv7nj2o2faxhwjpay6sajb9bwetgucwv8hn2af78mzhb6z5acukdzlqrx0xee21azo7l0la56lbltp4shy97 == \v\c\y\f\f\5\n\3\t\8\o\f\w\d\0\o\7\8\g\w\1\g\x\e\x\5\4\0\f\l\x\t\c\u\x\c\8\0\3\z\m\9\w\z\n\z\n\n\j\r\n\7\2\k\v\z\7\f\p\7\r\c\z\e\w\n\m\e\v\f\7\o\c\q\q\p\5\v\j\3\v\m\0\e\4\v\1\k\6\y\x\r\1\9\8\4\j\3\k\d\e\w\3\y\h\y\z\u\v\0\j\0\v\x\c\k\n\u\7\m\t\3\1\o\p\r\e\6\x\m\v\h\p\h\4\m\f\c\v\7\2\1\y\r\j\n\m\4\e\3\6\j\0\4\d\7\m\2\w\t\s\z\w\a\x\t\b\6\i\a\m\x\3\q\8\t\f\f\6\n\9\3\a\c\2\z\s\5\t\w\l\i\6\k\8\m\k\z\4\k\y\i\l\d\a\r\e\a\o\t\j\y\p\b\t\x\o\u\9\b\s\v\r\7\1\o\p\4\a\3\i\d\4\5\p\5\u\9\e\k\s\d\l\5\n\g\l\v\b\p\p\h\c\9\t\v\k\t\g\1\4\g\q\0\6\v\x\o\m\d\u\4\y\m\p\1\q\4\3\i\9\y\u\z\p\z\c\r\m\0\a\s\9\1\q\0\5\3\p\5\i\r\a\z\a\0\i\n\5\9\o\u\p\d\v\w\z\y\d\z\4\9\r\z\x\x\h\4\y\h\k\n\8\w\3\f\z\6\y\k\8\i\e\y\c\z\c\p\b\p\3\h\k\0\q\i\e\f\6\5\t\o\l\g\6\1\w\3\s\t\8\8\2\2\5\w\s\q\y\c\x\m\p\l\i\i\j\5\n\4\h\f\5\y\y\o\p\3\j\3\9\j\o\n\w\6\j\k\q\9\7\8\q\9\8\b\c\3\1\3\o\f\j\s\s\k\q\b\m\v\7\n\j\2\o\2\f\a\x\h\w\j\p\a\y\6\s\a\j\b\9\b\w\e\t\g\u\c\w\v\8\h\n\2\a\f\7\8\m\z\h\b\6\z\5\a\c\u\k\d\z\l\q\r\x\0\x\e\e\2\1\a\z\o\7\l\0\l\a\5\6\l\b\l\t\p\4\s\h\y\9\7 ]] 00:07:59.680 00:07:59.680 real 0m1.228s 00:07:59.680 user 0m0.612s 00:07:59.680 sys 0m0.289s 00:07:59.680 11:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.680 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.680 ************************************ 00:07:59.680 END TEST dd_flag_nofollow_forced_aio 00:07:59.680 ************************************ 00:07:59.940 11:04:10 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:59.940 11:04:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:59.940 11:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.940 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.940 ************************************ 00:07:59.940 START TEST dd_flag_noatime_forced_aio 00:07:59.940 ************************************ 00:07:59.940 11:04:10 -- common/autotest_common.sh@1114 -- # noatime 00:07:59.940 11:04:10 -- dd/posix.sh@53 -- # local atime_if 00:07:59.940 11:04:10 -- dd/posix.sh@54 -- # local atime_of 00:07:59.940 11:04:10 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:59.940 11:04:10 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.940 11:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.940 11:04:10 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.940 11:04:10 -- dd/posix.sh@60 -- 
# atime_if=1733483050 00:07:59.940 11:04:10 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.940 11:04:10 -- dd/posix.sh@61 -- # atime_of=1733483050 00:07:59.940 11:04:10 -- dd/posix.sh@66 -- # sleep 1 00:08:00.876 11:04:11 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.876 [2024-12-06 11:04:11.928632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.876 [2024-12-06 11:04:11.928744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70455 ] 00:08:01.134 [2024-12-06 11:04:12.071636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.134 [2024-12-06 11:04:12.111859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.134  [2024-12-06T11:04:12.540Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.393 00:08:01.393 11:04:12 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.393 11:04:12 -- dd/posix.sh@69 -- # (( atime_if == 1733483050 )) 00:08:01.393 11:04:12 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.393 11:04:12 -- dd/posix.sh@70 -- # (( atime_of == 1733483050 )) 00:08:01.393 11:04:12 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.393 [2024-12-06 11:04:12.377004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:01.393 [2024-12-06 11:04:12.377124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70462 ] 00:08:01.393 [2024-12-06 11:04:12.516534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.653 [2024-12-06 11:04:12.549271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.653  [2024-12-06T11:04:12.800Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.653 00:08:01.653 11:04:12 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.653 11:04:12 -- dd/posix.sh@73 -- # (( atime_if < 1733483052 )) 00:08:01.653 00:08:01.653 real 0m1.875s 00:08:01.653 user 0m0.413s 00:08:01.653 sys 0m0.217s 00:08:01.653 11:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.653 11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:01.653 ************************************ 00:08:01.653 END TEST dd_flag_noatime_forced_aio 00:08:01.653 ************************************ 00:08:01.653 11:04:12 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:01.653 11:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.653 11:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.653 11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:01.653 ************************************ 00:08:01.653 START TEST dd_flags_misc_forced_aio 00:08:01.653 ************************************ 00:08:01.653 11:04:12 -- common/autotest_common.sh@1114 -- # io 00:08:01.653 11:04:12 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:01.653 11:04:12 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:01.653 11:04:12 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:01.653 11:04:12 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:01.653 11:04:12 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:01.653 11:04:12 -- dd/common.sh@98 -- # xtrace_disable 00:08:01.653 11:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:01.653 11:04:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.653 11:04:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:01.912 [2024-12-06 11:04:12.831729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
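The noatime test that just finished reduces to a small pattern: capture the source file's access time with stat, copy it through spdk_dd with --iflag=noatime and check that the atime did not move, then copy again without the flag and expect it to advance. A minimal standalone sketch of that pattern follows; the spdk_dd path and file names are placeholders rather than values from this run, and the final check depends on the mount's atime policy.

#!/usr/bin/env bash
# Sketch: --iflag=noatime should leave the source file's atime untouched.
set -eu
DD=/path/to/spdk_dd                      # placeholder for build/bin/spdk_dd
src=dd.dump0 dst=dd.dump1
dd if=/dev/urandom of="$src" bs=512 count=1 status=none

atime_before=$(stat --printf=%X "$src")  # GNU stat: atime as epoch seconds
sleep 1
"$DD" --aio --if="$src" --iflag=noatime --of="$dst"
atime_after=$(stat --printf=%X "$src")
if (( atime_before == atime_after )); then echo "noatime honored"; fi

sleep 1
"$DD" --aio --if="$src" --of="$dst"      # no noatime: atime is free to move
# On strictatime mounts this advances; relatime may keep it in place.
if (( $(stat --printf=%X "$src") > atime_before )); then echo "atime advanced"; fi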
00:08:01.912 [2024-12-06 11:04:12.831832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70487 ] 00:08:01.912 [2024-12-06 11:04:12.960381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.912 [2024-12-06 11:04:12.990278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.912  [2024-12-06T11:04:13.319Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.172 00:08:02.172 11:04:13 -- dd/posix.sh@93 -- # [[ u2v95rcfwtzyvnytr5lyy4k3ym3g70y9dxpxh688tsrbxayftr2ymu9dzpjyltgqm2ry2l9g3g4tll1o7ss7gldjjngt3w670okz3rdz5n4cqdkpury85n3wak7ambn20gcejkozj8qrb7a3y6krpeh3fp98erwss6ymju56h41xz36dr7jos5qit9o3oqbgx3eybf7iyti5zhhieu9fj3ihuq0114535qf31208loh7k96qlwc8462pubyrgrhmzbkhy5z9adrqm1vjxnrsj03tb4akv22vimxlvhgrtdo4q7f1jfc5s2uer751shnv8u8e4zx6m5jprxebvzqntmjxllqxgnsxg42qec3azjlpzunhntijg8tq10667v3osotjxi3ft7syx6f498tst17k6i33vet7iqozs2p7n0dtsxvesi7nj5wewj5101gwadf8983ou3omh55ijdca8krjx2v1cblhngvq266puguri5btzumycxpy5rhc86az == \u\2\v\9\5\r\c\f\w\t\z\y\v\n\y\t\r\5\l\y\y\4\k\3\y\m\3\g\7\0\y\9\d\x\p\x\h\6\8\8\t\s\r\b\x\a\y\f\t\r\2\y\m\u\9\d\z\p\j\y\l\t\g\q\m\2\r\y\2\l\9\g\3\g\4\t\l\l\1\o\7\s\s\7\g\l\d\j\j\n\g\t\3\w\6\7\0\o\k\z\3\r\d\z\5\n\4\c\q\d\k\p\u\r\y\8\5\n\3\w\a\k\7\a\m\b\n\2\0\g\c\e\j\k\o\z\j\8\q\r\b\7\a\3\y\6\k\r\p\e\h\3\f\p\9\8\e\r\w\s\s\6\y\m\j\u\5\6\h\4\1\x\z\3\6\d\r\7\j\o\s\5\q\i\t\9\o\3\o\q\b\g\x\3\e\y\b\f\7\i\y\t\i\5\z\h\h\i\e\u\9\f\j\3\i\h\u\q\0\1\1\4\5\3\5\q\f\3\1\2\0\8\l\o\h\7\k\9\6\q\l\w\c\8\4\6\2\p\u\b\y\r\g\r\h\m\z\b\k\h\y\5\z\9\a\d\r\q\m\1\v\j\x\n\r\s\j\0\3\t\b\4\a\k\v\2\2\v\i\m\x\l\v\h\g\r\t\d\o\4\q\7\f\1\j\f\c\5\s\2\u\e\r\7\5\1\s\h\n\v\8\u\8\e\4\z\x\6\m\5\j\p\r\x\e\b\v\z\q\n\t\m\j\x\l\l\q\x\g\n\s\x\g\4\2\q\e\c\3\a\z\j\l\p\z\u\n\h\n\t\i\j\g\8\t\q\1\0\6\6\7\v\3\o\s\o\t\j\x\i\3\f\t\7\s\y\x\6\f\4\9\8\t\s\t\1\7\k\6\i\3\3\v\e\t\7\i\q\o\z\s\2\p\7\n\0\d\t\s\x\v\e\s\i\7\n\j\5\w\e\w\j\5\1\0\1\g\w\a\d\f\8\9\8\3\o\u\3\o\m\h\5\5\i\j\d\c\a\8\k\r\j\x\2\v\1\c\b\l\h\n\g\v\q\2\6\6\p\u\g\u\r\i\5\b\t\z\u\m\y\c\x\p\y\5\r\h\c\8\6\a\z ]] 00:08:02.172 11:04:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.172 11:04:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.172 [2024-12-06 11:04:13.217385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:02.172 [2024-12-06 11:04:13.217504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70495 ] 00:08:02.431 [2024-12-06 11:04:13.354289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.431 [2024-12-06 11:04:13.386364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.431  [2024-12-06T11:04:13.578Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.431 00:08:02.432 11:04:13 -- dd/posix.sh@93 -- # [[ u2v95rcfwtzyvnytr5lyy4k3ym3g70y9dxpxh688tsrbxayftr2ymu9dzpjyltgqm2ry2l9g3g4tll1o7ss7gldjjngt3w670okz3rdz5n4cqdkpury85n3wak7ambn20gcejkozj8qrb7a3y6krpeh3fp98erwss6ymju56h41xz36dr7jos5qit9o3oqbgx3eybf7iyti5zhhieu9fj3ihuq0114535qf31208loh7k96qlwc8462pubyrgrhmzbkhy5z9adrqm1vjxnrsj03tb4akv22vimxlvhgrtdo4q7f1jfc5s2uer751shnv8u8e4zx6m5jprxebvzqntmjxllqxgnsxg42qec3azjlpzunhntijg8tq10667v3osotjxi3ft7syx6f498tst17k6i33vet7iqozs2p7n0dtsxvesi7nj5wewj5101gwadf8983ou3omh55ijdca8krjx2v1cblhngvq266puguri5btzumycxpy5rhc86az == \u\2\v\9\5\r\c\f\w\t\z\y\v\n\y\t\r\5\l\y\y\4\k\3\y\m\3\g\7\0\y\9\d\x\p\x\h\6\8\8\t\s\r\b\x\a\y\f\t\r\2\y\m\u\9\d\z\p\j\y\l\t\g\q\m\2\r\y\2\l\9\g\3\g\4\t\l\l\1\o\7\s\s\7\g\l\d\j\j\n\g\t\3\w\6\7\0\o\k\z\3\r\d\z\5\n\4\c\q\d\k\p\u\r\y\8\5\n\3\w\a\k\7\a\m\b\n\2\0\g\c\e\j\k\o\z\j\8\q\r\b\7\a\3\y\6\k\r\p\e\h\3\f\p\9\8\e\r\w\s\s\6\y\m\j\u\5\6\h\4\1\x\z\3\6\d\r\7\j\o\s\5\q\i\t\9\o\3\o\q\b\g\x\3\e\y\b\f\7\i\y\t\i\5\z\h\h\i\e\u\9\f\j\3\i\h\u\q\0\1\1\4\5\3\5\q\f\3\1\2\0\8\l\o\h\7\k\9\6\q\l\w\c\8\4\6\2\p\u\b\y\r\g\r\h\m\z\b\k\h\y\5\z\9\a\d\r\q\m\1\v\j\x\n\r\s\j\0\3\t\b\4\a\k\v\2\2\v\i\m\x\l\v\h\g\r\t\d\o\4\q\7\f\1\j\f\c\5\s\2\u\e\r\7\5\1\s\h\n\v\8\u\8\e\4\z\x\6\m\5\j\p\r\x\e\b\v\z\q\n\t\m\j\x\l\l\q\x\g\n\s\x\g\4\2\q\e\c\3\a\z\j\l\p\z\u\n\h\n\t\i\j\g\8\t\q\1\0\6\6\7\v\3\o\s\o\t\j\x\i\3\f\t\7\s\y\x\6\f\4\9\8\t\s\t\1\7\k\6\i\3\3\v\e\t\7\i\q\o\z\s\2\p\7\n\0\d\t\s\x\v\e\s\i\7\n\j\5\w\e\w\j\5\1\0\1\g\w\a\d\f\8\9\8\3\o\u\3\o\m\h\5\5\i\j\d\c\a\8\k\r\j\x\2\v\1\c\b\l\h\n\g\v\q\2\6\6\p\u\g\u\r\i\5\b\t\z\u\m\y\c\x\p\y\5\r\h\c\8\6\a\z ]] 00:08:02.432 11:04:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.432 11:04:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:02.691 [2024-12-06 11:04:13.620225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:02.691 [2024-12-06 11:04:13.620335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70502 ] 00:08:02.691 [2024-12-06 11:04:13.760475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.691 [2024-12-06 11:04:13.796882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.951  [2024-12-06T11:04:14.098Z] Copying: 512/512 [B] (average 166 kBps) 00:08:02.951 00:08:02.951 11:04:14 -- dd/posix.sh@93 -- # [[ u2v95rcfwtzyvnytr5lyy4k3ym3g70y9dxpxh688tsrbxayftr2ymu9dzpjyltgqm2ry2l9g3g4tll1o7ss7gldjjngt3w670okz3rdz5n4cqdkpury85n3wak7ambn20gcejkozj8qrb7a3y6krpeh3fp98erwss6ymju56h41xz36dr7jos5qit9o3oqbgx3eybf7iyti5zhhieu9fj3ihuq0114535qf31208loh7k96qlwc8462pubyrgrhmzbkhy5z9adrqm1vjxnrsj03tb4akv22vimxlvhgrtdo4q7f1jfc5s2uer751shnv8u8e4zx6m5jprxebvzqntmjxllqxgnsxg42qec3azjlpzunhntijg8tq10667v3osotjxi3ft7syx6f498tst17k6i33vet7iqozs2p7n0dtsxvesi7nj5wewj5101gwadf8983ou3omh55ijdca8krjx2v1cblhngvq266puguri5btzumycxpy5rhc86az == \u\2\v\9\5\r\c\f\w\t\z\y\v\n\y\t\r\5\l\y\y\4\k\3\y\m\3\g\7\0\y\9\d\x\p\x\h\6\8\8\t\s\r\b\x\a\y\f\t\r\2\y\m\u\9\d\z\p\j\y\l\t\g\q\m\2\r\y\2\l\9\g\3\g\4\t\l\l\1\o\7\s\s\7\g\l\d\j\j\n\g\t\3\w\6\7\0\o\k\z\3\r\d\z\5\n\4\c\q\d\k\p\u\r\y\8\5\n\3\w\a\k\7\a\m\b\n\2\0\g\c\e\j\k\o\z\j\8\q\r\b\7\a\3\y\6\k\r\p\e\h\3\f\p\9\8\e\r\w\s\s\6\y\m\j\u\5\6\h\4\1\x\z\3\6\d\r\7\j\o\s\5\q\i\t\9\o\3\o\q\b\g\x\3\e\y\b\f\7\i\y\t\i\5\z\h\h\i\e\u\9\f\j\3\i\h\u\q\0\1\1\4\5\3\5\q\f\3\1\2\0\8\l\o\h\7\k\9\6\q\l\w\c\8\4\6\2\p\u\b\y\r\g\r\h\m\z\b\k\h\y\5\z\9\a\d\r\q\m\1\v\j\x\n\r\s\j\0\3\t\b\4\a\k\v\2\2\v\i\m\x\l\v\h\g\r\t\d\o\4\q\7\f\1\j\f\c\5\s\2\u\e\r\7\5\1\s\h\n\v\8\u\8\e\4\z\x\6\m\5\j\p\r\x\e\b\v\z\q\n\t\m\j\x\l\l\q\x\g\n\s\x\g\4\2\q\e\c\3\a\z\j\l\p\z\u\n\h\n\t\i\j\g\8\t\q\1\0\6\6\7\v\3\o\s\o\t\j\x\i\3\f\t\7\s\y\x\6\f\4\9\8\t\s\t\1\7\k\6\i\3\3\v\e\t\7\i\q\o\z\s\2\p\7\n\0\d\t\s\x\v\e\s\i\7\n\j\5\w\e\w\j\5\1\0\1\g\w\a\d\f\8\9\8\3\o\u\3\o\m\h\5\5\i\j\d\c\a\8\k\r\j\x\2\v\1\c\b\l\h\n\g\v\q\2\6\6\p\u\g\u\r\i\5\b\t\z\u\m\y\c\x\p\y\5\r\h\c\8\6\a\z ]] 00:08:02.951 11:04:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.951 11:04:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:02.951 [2024-12-06 11:04:14.068434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:02.951 [2024-12-06 11:04:14.068568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70510 ] 00:08:03.210 [2024-12-06 11:04:14.209544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.210 [2024-12-06 11:04:14.248362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.210  [2024-12-06T11:04:14.617Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.470 00:08:03.470 11:04:14 -- dd/posix.sh@93 -- # [[ u2v95rcfwtzyvnytr5lyy4k3ym3g70y9dxpxh688tsrbxayftr2ymu9dzpjyltgqm2ry2l9g3g4tll1o7ss7gldjjngt3w670okz3rdz5n4cqdkpury85n3wak7ambn20gcejkozj8qrb7a3y6krpeh3fp98erwss6ymju56h41xz36dr7jos5qit9o3oqbgx3eybf7iyti5zhhieu9fj3ihuq0114535qf31208loh7k96qlwc8462pubyrgrhmzbkhy5z9adrqm1vjxnrsj03tb4akv22vimxlvhgrtdo4q7f1jfc5s2uer751shnv8u8e4zx6m5jprxebvzqntmjxllqxgnsxg42qec3azjlpzunhntijg8tq10667v3osotjxi3ft7syx6f498tst17k6i33vet7iqozs2p7n0dtsxvesi7nj5wewj5101gwadf8983ou3omh55ijdca8krjx2v1cblhngvq266puguri5btzumycxpy5rhc86az == \u\2\v\9\5\r\c\f\w\t\z\y\v\n\y\t\r\5\l\y\y\4\k\3\y\m\3\g\7\0\y\9\d\x\p\x\h\6\8\8\t\s\r\b\x\a\y\f\t\r\2\y\m\u\9\d\z\p\j\y\l\t\g\q\m\2\r\y\2\l\9\g\3\g\4\t\l\l\1\o\7\s\s\7\g\l\d\j\j\n\g\t\3\w\6\7\0\o\k\z\3\r\d\z\5\n\4\c\q\d\k\p\u\r\y\8\5\n\3\w\a\k\7\a\m\b\n\2\0\g\c\e\j\k\o\z\j\8\q\r\b\7\a\3\y\6\k\r\p\e\h\3\f\p\9\8\e\r\w\s\s\6\y\m\j\u\5\6\h\4\1\x\z\3\6\d\r\7\j\o\s\5\q\i\t\9\o\3\o\q\b\g\x\3\e\y\b\f\7\i\y\t\i\5\z\h\h\i\e\u\9\f\j\3\i\h\u\q\0\1\1\4\5\3\5\q\f\3\1\2\0\8\l\o\h\7\k\9\6\q\l\w\c\8\4\6\2\p\u\b\y\r\g\r\h\m\z\b\k\h\y\5\z\9\a\d\r\q\m\1\v\j\x\n\r\s\j\0\3\t\b\4\a\k\v\2\2\v\i\m\x\l\v\h\g\r\t\d\o\4\q\7\f\1\j\f\c\5\s\2\u\e\r\7\5\1\s\h\n\v\8\u\8\e\4\z\x\6\m\5\j\p\r\x\e\b\v\z\q\n\t\m\j\x\l\l\q\x\g\n\s\x\g\4\2\q\e\c\3\a\z\j\l\p\z\u\n\h\n\t\i\j\g\8\t\q\1\0\6\6\7\v\3\o\s\o\t\j\x\i\3\f\t\7\s\y\x\6\f\4\9\8\t\s\t\1\7\k\6\i\3\3\v\e\t\7\i\q\o\z\s\2\p\7\n\0\d\t\s\x\v\e\s\i\7\n\j\5\w\e\w\j\5\1\0\1\g\w\a\d\f\8\9\8\3\o\u\3\o\m\h\5\5\i\j\d\c\a\8\k\r\j\x\2\v\1\c\b\l\h\n\g\v\q\2\6\6\p\u\g\u\r\i\5\b\t\z\u\m\y\c\x\p\y\5\r\h\c\8\6\a\z ]] 00:08:03.470 11:04:14 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.470 11:04:14 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.470 11:04:14 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.470 11:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 11:04:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.470 11:04:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:03.470 [2024-12-06 11:04:14.498583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:03.470 [2024-12-06 11:04:14.498683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:08:03.730 [2024-12-06 11:04:14.633908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.730 [2024-12-06 11:04:14.665195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.730  [2024-12-06T11:04:14.877Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.730 00:08:03.730 11:04:14 -- dd/posix.sh@93 -- # [[ svya3pvtp7ou9qpchi18yzukjdvszh1feak0hdcl20k7649nqj33rdgu1xvycj0tnnix195w07afqqdylte7rsw7q2r8k7dpc7c985o40a5qls2vprw74o5mdpe7zl73neopgy37mlvs7q1rr66t02ab2chvc51qbkdjnlola5hin858nizyl7stwxjvmc99ynlfse4qx8tpxyzn4i7d9yrcxfy0zcvw6h6u3a04dhwrkv5pnci8eclta0rnyr7q2hbpdf0wjht63ssenb67yty2f2g1ko0sckbuom6990xwdqy89t2qtsz2yfxg48kteu58pov2fec1bm8yot9wzsgt21oe7nc1wav4fv6fms4yrel3f7oapz56h0rk867vn91zqenqrtl957t54b6q8ts2asfvw0j2k9sjt3vzs0cbx8ql8ci4ic5hz8jep01q598l731fzm9i668nl1b9tzad1inp61va9fq3hjhoj8tb0kqtfw6z2fe7omhv8mkb == \s\v\y\a\3\p\v\t\p\7\o\u\9\q\p\c\h\i\1\8\y\z\u\k\j\d\v\s\z\h\1\f\e\a\k\0\h\d\c\l\2\0\k\7\6\4\9\n\q\j\3\3\r\d\g\u\1\x\v\y\c\j\0\t\n\n\i\x\1\9\5\w\0\7\a\f\q\q\d\y\l\t\e\7\r\s\w\7\q\2\r\8\k\7\d\p\c\7\c\9\8\5\o\4\0\a\5\q\l\s\2\v\p\r\w\7\4\o\5\m\d\p\e\7\z\l\7\3\n\e\o\p\g\y\3\7\m\l\v\s\7\q\1\r\r\6\6\t\0\2\a\b\2\c\h\v\c\5\1\q\b\k\d\j\n\l\o\l\a\5\h\i\n\8\5\8\n\i\z\y\l\7\s\t\w\x\j\v\m\c\9\9\y\n\l\f\s\e\4\q\x\8\t\p\x\y\z\n\4\i\7\d\9\y\r\c\x\f\y\0\z\c\v\w\6\h\6\u\3\a\0\4\d\h\w\r\k\v\5\p\n\c\i\8\e\c\l\t\a\0\r\n\y\r\7\q\2\h\b\p\d\f\0\w\j\h\t\6\3\s\s\e\n\b\6\7\y\t\y\2\f\2\g\1\k\o\0\s\c\k\b\u\o\m\6\9\9\0\x\w\d\q\y\8\9\t\2\q\t\s\z\2\y\f\x\g\4\8\k\t\e\u\5\8\p\o\v\2\f\e\c\1\b\m\8\y\o\t\9\w\z\s\g\t\2\1\o\e\7\n\c\1\w\a\v\4\f\v\6\f\m\s\4\y\r\e\l\3\f\7\o\a\p\z\5\6\h\0\r\k\8\6\7\v\n\9\1\z\q\e\n\q\r\t\l\9\5\7\t\5\4\b\6\q\8\t\s\2\a\s\f\v\w\0\j\2\k\9\s\j\t\3\v\z\s\0\c\b\x\8\q\l\8\c\i\4\i\c\5\h\z\8\j\e\p\0\1\q\5\9\8\l\7\3\1\f\z\m\9\i\6\6\8\n\l\1\b\9\t\z\a\d\1\i\n\p\6\1\v\a\9\f\q\3\h\j\h\o\j\8\t\b\0\k\q\t\f\w\6\z\2\f\e\7\o\m\h\v\8\m\k\b ]] 00:08:03.730 11:04:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.730 11:04:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:03.990 [2024-12-06 11:04:14.901396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:03.990 [2024-12-06 11:04:14.901497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70519 ] 00:08:03.990 [2024-12-06 11:04:15.040645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.990 [2024-12-06 11:04:15.072539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.990  [2024-12-06T11:04:15.396Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.249 00:08:04.249 11:04:15 -- dd/posix.sh@93 -- # [[ svya3pvtp7ou9qpchi18yzukjdvszh1feak0hdcl20k7649nqj33rdgu1xvycj0tnnix195w07afqqdylte7rsw7q2r8k7dpc7c985o40a5qls2vprw74o5mdpe7zl73neopgy37mlvs7q1rr66t02ab2chvc51qbkdjnlola5hin858nizyl7stwxjvmc99ynlfse4qx8tpxyzn4i7d9yrcxfy0zcvw6h6u3a04dhwrkv5pnci8eclta0rnyr7q2hbpdf0wjht63ssenb67yty2f2g1ko0sckbuom6990xwdqy89t2qtsz2yfxg48kteu58pov2fec1bm8yot9wzsgt21oe7nc1wav4fv6fms4yrel3f7oapz56h0rk867vn91zqenqrtl957t54b6q8ts2asfvw0j2k9sjt3vzs0cbx8ql8ci4ic5hz8jep01q598l731fzm9i668nl1b9tzad1inp61va9fq3hjhoj8tb0kqtfw6z2fe7omhv8mkb == \s\v\y\a\3\p\v\t\p\7\o\u\9\q\p\c\h\i\1\8\y\z\u\k\j\d\v\s\z\h\1\f\e\a\k\0\h\d\c\l\2\0\k\7\6\4\9\n\q\j\3\3\r\d\g\u\1\x\v\y\c\j\0\t\n\n\i\x\1\9\5\w\0\7\a\f\q\q\d\y\l\t\e\7\r\s\w\7\q\2\r\8\k\7\d\p\c\7\c\9\8\5\o\4\0\a\5\q\l\s\2\v\p\r\w\7\4\o\5\m\d\p\e\7\z\l\7\3\n\e\o\p\g\y\3\7\m\l\v\s\7\q\1\r\r\6\6\t\0\2\a\b\2\c\h\v\c\5\1\q\b\k\d\j\n\l\o\l\a\5\h\i\n\8\5\8\n\i\z\y\l\7\s\t\w\x\j\v\m\c\9\9\y\n\l\f\s\e\4\q\x\8\t\p\x\y\z\n\4\i\7\d\9\y\r\c\x\f\y\0\z\c\v\w\6\h\6\u\3\a\0\4\d\h\w\r\k\v\5\p\n\c\i\8\e\c\l\t\a\0\r\n\y\r\7\q\2\h\b\p\d\f\0\w\j\h\t\6\3\s\s\e\n\b\6\7\y\t\y\2\f\2\g\1\k\o\0\s\c\k\b\u\o\m\6\9\9\0\x\w\d\q\y\8\9\t\2\q\t\s\z\2\y\f\x\g\4\8\k\t\e\u\5\8\p\o\v\2\f\e\c\1\b\m\8\y\o\t\9\w\z\s\g\t\2\1\o\e\7\n\c\1\w\a\v\4\f\v\6\f\m\s\4\y\r\e\l\3\f\7\o\a\p\z\5\6\h\0\r\k\8\6\7\v\n\9\1\z\q\e\n\q\r\t\l\9\5\7\t\5\4\b\6\q\8\t\s\2\a\s\f\v\w\0\j\2\k\9\s\j\t\3\v\z\s\0\c\b\x\8\q\l\8\c\i\4\i\c\5\h\z\8\j\e\p\0\1\q\5\9\8\l\7\3\1\f\z\m\9\i\6\6\8\n\l\1\b\9\t\z\a\d\1\i\n\p\6\1\v\a\9\f\q\3\h\j\h\o\j\8\t\b\0\k\q\t\f\w\6\z\2\f\e\7\o\m\h\v\8\m\k\b ]] 00:08:04.249 11:04:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.249 11:04:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.249 [2024-12-06 11:04:15.304868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:04.249 [2024-12-06 11:04:15.305010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70527 ] 00:08:04.507 [2024-12-06 11:04:15.442577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.507 [2024-12-06 11:04:15.473276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.507  [2024-12-06T11:04:15.913Z] Copying: 512/512 [B] (average 250 kBps) 00:08:04.766 00:08:04.766 11:04:15 -- dd/posix.sh@93 -- # [[ svya3pvtp7ou9qpchi18yzukjdvszh1feak0hdcl20k7649nqj33rdgu1xvycj0tnnix195w07afqqdylte7rsw7q2r8k7dpc7c985o40a5qls2vprw74o5mdpe7zl73neopgy37mlvs7q1rr66t02ab2chvc51qbkdjnlola5hin858nizyl7stwxjvmc99ynlfse4qx8tpxyzn4i7d9yrcxfy0zcvw6h6u3a04dhwrkv5pnci8eclta0rnyr7q2hbpdf0wjht63ssenb67yty2f2g1ko0sckbuom6990xwdqy89t2qtsz2yfxg48kteu58pov2fec1bm8yot9wzsgt21oe7nc1wav4fv6fms4yrel3f7oapz56h0rk867vn91zqenqrtl957t54b6q8ts2asfvw0j2k9sjt3vzs0cbx8ql8ci4ic5hz8jep01q598l731fzm9i668nl1b9tzad1inp61va9fq3hjhoj8tb0kqtfw6z2fe7omhv8mkb == \s\v\y\a\3\p\v\t\p\7\o\u\9\q\p\c\h\i\1\8\y\z\u\k\j\d\v\s\z\h\1\f\e\a\k\0\h\d\c\l\2\0\k\7\6\4\9\n\q\j\3\3\r\d\g\u\1\x\v\y\c\j\0\t\n\n\i\x\1\9\5\w\0\7\a\f\q\q\d\y\l\t\e\7\r\s\w\7\q\2\r\8\k\7\d\p\c\7\c\9\8\5\o\4\0\a\5\q\l\s\2\v\p\r\w\7\4\o\5\m\d\p\e\7\z\l\7\3\n\e\o\p\g\y\3\7\m\l\v\s\7\q\1\r\r\6\6\t\0\2\a\b\2\c\h\v\c\5\1\q\b\k\d\j\n\l\o\l\a\5\h\i\n\8\5\8\n\i\z\y\l\7\s\t\w\x\j\v\m\c\9\9\y\n\l\f\s\e\4\q\x\8\t\p\x\y\z\n\4\i\7\d\9\y\r\c\x\f\y\0\z\c\v\w\6\h\6\u\3\a\0\4\d\h\w\r\k\v\5\p\n\c\i\8\e\c\l\t\a\0\r\n\y\r\7\q\2\h\b\p\d\f\0\w\j\h\t\6\3\s\s\e\n\b\6\7\y\t\y\2\f\2\g\1\k\o\0\s\c\k\b\u\o\m\6\9\9\0\x\w\d\q\y\8\9\t\2\q\t\s\z\2\y\f\x\g\4\8\k\t\e\u\5\8\p\o\v\2\f\e\c\1\b\m\8\y\o\t\9\w\z\s\g\t\2\1\o\e\7\n\c\1\w\a\v\4\f\v\6\f\m\s\4\y\r\e\l\3\f\7\o\a\p\z\5\6\h\0\r\k\8\6\7\v\n\9\1\z\q\e\n\q\r\t\l\9\5\7\t\5\4\b\6\q\8\t\s\2\a\s\f\v\w\0\j\2\k\9\s\j\t\3\v\z\s\0\c\b\x\8\q\l\8\c\i\4\i\c\5\h\z\8\j\e\p\0\1\q\5\9\8\l\7\3\1\f\z\m\9\i\6\6\8\n\l\1\b\9\t\z\a\d\1\i\n\p\6\1\v\a\9\f\q\3\h\j\h\o\j\8\t\b\0\k\q\t\f\w\6\z\2\f\e\7\o\m\h\v\8\m\k\b ]] 00:08:04.766 11:04:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.766 11:04:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:04.766 [2024-12-06 11:04:15.703696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:04.766 [2024-12-06 11:04:15.703813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70534 ] 00:08:04.766 [2024-12-06 11:04:15.840019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.766 [2024-12-06 11:04:15.870120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.025  [2024-12-06T11:04:16.172Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.025 00:08:05.025 11:04:16 -- dd/posix.sh@93 -- # [[ svya3pvtp7ou9qpchi18yzukjdvszh1feak0hdcl20k7649nqj33rdgu1xvycj0tnnix195w07afqqdylte7rsw7q2r8k7dpc7c985o40a5qls2vprw74o5mdpe7zl73neopgy37mlvs7q1rr66t02ab2chvc51qbkdjnlola5hin858nizyl7stwxjvmc99ynlfse4qx8tpxyzn4i7d9yrcxfy0zcvw6h6u3a04dhwrkv5pnci8eclta0rnyr7q2hbpdf0wjht63ssenb67yty2f2g1ko0sckbuom6990xwdqy89t2qtsz2yfxg48kteu58pov2fec1bm8yot9wzsgt21oe7nc1wav4fv6fms4yrel3f7oapz56h0rk867vn91zqenqrtl957t54b6q8ts2asfvw0j2k9sjt3vzs0cbx8ql8ci4ic5hz8jep01q598l731fzm9i668nl1b9tzad1inp61va9fq3hjhoj8tb0kqtfw6z2fe7omhv8mkb == \s\v\y\a\3\p\v\t\p\7\o\u\9\q\p\c\h\i\1\8\y\z\u\k\j\d\v\s\z\h\1\f\e\a\k\0\h\d\c\l\2\0\k\7\6\4\9\n\q\j\3\3\r\d\g\u\1\x\v\y\c\j\0\t\n\n\i\x\1\9\5\w\0\7\a\f\q\q\d\y\l\t\e\7\r\s\w\7\q\2\r\8\k\7\d\p\c\7\c\9\8\5\o\4\0\a\5\q\l\s\2\v\p\r\w\7\4\o\5\m\d\p\e\7\z\l\7\3\n\e\o\p\g\y\3\7\m\l\v\s\7\q\1\r\r\6\6\t\0\2\a\b\2\c\h\v\c\5\1\q\b\k\d\j\n\l\o\l\a\5\h\i\n\8\5\8\n\i\z\y\l\7\s\t\w\x\j\v\m\c\9\9\y\n\l\f\s\e\4\q\x\8\t\p\x\y\z\n\4\i\7\d\9\y\r\c\x\f\y\0\z\c\v\w\6\h\6\u\3\a\0\4\d\h\w\r\k\v\5\p\n\c\i\8\e\c\l\t\a\0\r\n\y\r\7\q\2\h\b\p\d\f\0\w\j\h\t\6\3\s\s\e\n\b\6\7\y\t\y\2\f\2\g\1\k\o\0\s\c\k\b\u\o\m\6\9\9\0\x\w\d\q\y\8\9\t\2\q\t\s\z\2\y\f\x\g\4\8\k\t\e\u\5\8\p\o\v\2\f\e\c\1\b\m\8\y\o\t\9\w\z\s\g\t\2\1\o\e\7\n\c\1\w\a\v\4\f\v\6\f\m\s\4\y\r\e\l\3\f\7\o\a\p\z\5\6\h\0\r\k\8\6\7\v\n\9\1\z\q\e\n\q\r\t\l\9\5\7\t\5\4\b\6\q\8\t\s\2\a\s\f\v\w\0\j\2\k\9\s\j\t\3\v\z\s\0\c\b\x\8\q\l\8\c\i\4\i\c\5\h\z\8\j\e\p\0\1\q\5\9\8\l\7\3\1\f\z\m\9\i\6\6\8\n\l\1\b\9\t\z\a\d\1\i\n\p\6\1\v\a\9\f\q\3\h\j\h\o\j\8\t\b\0\k\q\t\f\w\6\z\2\f\e\7\o\m\h\v\8\m\k\b ]] 00:08:05.025 00:08:05.025 real 0m3.282s 00:08:05.025 user 0m1.571s 00:08:05.025 sys 0m0.736s 00:08:05.025 ************************************ 00:08:05.025 END TEST dd_flags_misc_forced_aio 00:08:05.025 ************************************ 00:08:05.025 11:04:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.025 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:05.025 11:04:16 -- dd/posix.sh@1 -- # cleanup 00:08:05.025 11:04:16 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.025 11:04:16 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.025 00:08:05.025 real 0m15.936s 00:08:05.025 user 0m6.686s 00:08:05.025 sys 0m3.403s 00:08:05.025 11:04:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.025 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:05.025 ************************************ 00:08:05.025 END TEST spdk_dd_posix 00:08:05.025 ************************************ 00:08:05.025 11:04:16 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.025 11:04:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.025 11:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 
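The misc-flags test that just passed is essentially a cross product: every read flag paired with every write flag, with the copied file compared back against the source after each run. A sketch of that loop is below; the spdk_dd path is a placeholder, and cmp stands in for the string comparison of file contents shown in the trace.

#!/usr/bin/env bash
# Sketch: iterate direct/nonblock on input against direct/nonblock/sync/dsync on output.
set -eu
DD=/path/to/spdk_dd                          # placeholder
flags_ro=(direct nonblock)                   # flags usable for reading
flags_rw=("${flags_ro[@]}" sync dsync)       # writing also accepts sync/dsync

src=dd.dump0 dst=dd.dump1
dd if=/dev/urandom of="$src" bs=512 count=1 status=none   # 512 B keeps O_DIRECT alignment happy

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    if cmp -s "$src" "$dst"; then echo "ok: iflag=$flag_ro oflag=$flag_rw"; fi
  done
done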
00:08:05.025 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:05.025 ************************************ 00:08:05.025 START TEST spdk_dd_malloc 00:08:05.025 ************************************ 00:08:05.025 11:04:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.285 * Looking for test storage... 00:08:05.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:05.285 11:04:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:05.285 11:04:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:05.285 11:04:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:05.285 11:04:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:05.285 11:04:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:05.285 11:04:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:05.285 11:04:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:05.285 11:04:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:05.285 11:04:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:05.285 11:04:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.285 11:04:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:05.285 11:04:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:05.285 11:04:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:05.285 11:04:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:05.285 11:04:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:05.285 11:04:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:05.285 11:04:16 -- scripts/common.sh@344 -- # : 1 00:08:05.285 11:04:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:05.285 11:04:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.285 11:04:16 -- scripts/common.sh@364 -- # decimal 1 00:08:05.285 11:04:16 -- scripts/common.sh@352 -- # local d=1 00:08:05.285 11:04:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.285 11:04:16 -- scripts/common.sh@354 -- # echo 1 00:08:05.285 11:04:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:05.285 11:04:16 -- scripts/common.sh@365 -- # decimal 2 00:08:05.285 11:04:16 -- scripts/common.sh@352 -- # local d=2 00:08:05.285 11:04:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.285 11:04:16 -- scripts/common.sh@354 -- # echo 2 00:08:05.285 11:04:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:05.285 11:04:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:05.285 11:04:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:05.285 11:04:16 -- scripts/common.sh@367 -- # return 0 00:08:05.285 11:04:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.285 11:04:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:05.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.285 --rc genhtml_branch_coverage=1 00:08:05.285 --rc genhtml_function_coverage=1 00:08:05.285 --rc genhtml_legend=1 00:08:05.285 --rc geninfo_all_blocks=1 00:08:05.285 --rc geninfo_unexecuted_blocks=1 00:08:05.285 00:08:05.285 ' 00:08:05.285 11:04:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:05.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.285 --rc genhtml_branch_coverage=1 00:08:05.285 --rc genhtml_function_coverage=1 00:08:05.285 --rc genhtml_legend=1 00:08:05.285 --rc geninfo_all_blocks=1 00:08:05.285 --rc geninfo_unexecuted_blocks=1 00:08:05.285 00:08:05.285 ' 00:08:05.285 11:04:16 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:05.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.285 --rc genhtml_branch_coverage=1 00:08:05.285 --rc genhtml_function_coverage=1 00:08:05.285 --rc genhtml_legend=1 00:08:05.285 --rc geninfo_all_blocks=1 00:08:05.285 --rc geninfo_unexecuted_blocks=1 00:08:05.285 00:08:05.285 ' 00:08:05.285 11:04:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:05.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.285 --rc genhtml_branch_coverage=1 00:08:05.285 --rc genhtml_function_coverage=1 00:08:05.285 --rc genhtml_legend=1 00:08:05.285 --rc geninfo_all_blocks=1 00:08:05.285 --rc geninfo_unexecuted_blocks=1 00:08:05.285 00:08:05.285 ' 00:08:05.285 11:04:16 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.285 11:04:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.285 11:04:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.285 11:04:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.285 11:04:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.285 11:04:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.285 11:04:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.285 11:04:16 -- paths/export.sh@5 -- # export PATH 00:08:05.286 11:04:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.286 11:04:16 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:05.286 11:04:16 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.286 11:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.286 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:05.286 ************************************ 00:08:05.286 START TEST dd_malloc_copy 00:08:05.286 ************************************ 00:08:05.286 11:04:16 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:05.286 11:04:16 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:05.286 11:04:16 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:05.286 11:04:16 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:05.286 11:04:16 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:05.286 11:04:16 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:05.286 11:04:16 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:05.286 11:04:16 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:05.286 11:04:16 -- dd/malloc.sh@28 -- # gen_conf 00:08:05.286 11:04:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.286 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:08:05.286 [2024-12-06 11:04:16.380355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.286 [2024-12-06 11:04:16.380498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70610 ] 00:08:05.286 { 00:08:05.286 "subsystems": [ 00:08:05.286 { 00:08:05.286 "subsystem": "bdev", 00:08:05.286 "config": [ 00:08:05.286 { 00:08:05.286 "params": { 00:08:05.286 "block_size": 512, 00:08:05.286 "num_blocks": 1048576, 00:08:05.286 "name": "malloc0" 00:08:05.286 }, 00:08:05.286 "method": "bdev_malloc_create" 00:08:05.286 }, 00:08:05.286 { 00:08:05.286 "params": { 00:08:05.286 "block_size": 512, 00:08:05.286 "num_blocks": 1048576, 00:08:05.286 "name": "malloc1" 00:08:05.286 }, 00:08:05.286 "method": "bdev_malloc_create" 00:08:05.286 }, 00:08:05.286 { 00:08:05.286 "method": "bdev_wait_for_examine" 00:08:05.286 } 00:08:05.286 ] 00:08:05.286 } 00:08:05.286 ] 00:08:05.286 } 00:08:05.546 [2024-12-06 11:04:16.521455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.546 [2024-12-06 11:04:16.559885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.925  [2024-12-06T11:04:19.011Z] Copying: 224/512 [MB] (224 MBps) [2024-12-06T11:04:19.011Z] Copying: 463/512 [MB] (239 MBps) [2024-12-06T11:04:19.580Z] Copying: 512/512 [MB] (average 233 MBps) 00:08:08.433 00:08:08.433 11:04:19 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:08.433 11:04:19 -- dd/malloc.sh@33 -- # gen_conf 00:08:08.433 11:04:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.433 11:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.433 [2024-12-06 11:04:19.410478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:08.433 [2024-12-06 11:04:19.410588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70652 ] 00:08:08.433 { 00:08:08.433 "subsystems": [ 00:08:08.433 { 00:08:08.433 "subsystem": "bdev", 00:08:08.433 "config": [ 00:08:08.433 { 00:08:08.433 "params": { 00:08:08.433 "block_size": 512, 00:08:08.433 "num_blocks": 1048576, 00:08:08.433 "name": "malloc0" 00:08:08.433 }, 00:08:08.433 "method": "bdev_malloc_create" 00:08:08.433 }, 00:08:08.433 { 00:08:08.433 "params": { 00:08:08.433 "block_size": 512, 00:08:08.433 "num_blocks": 1048576, 00:08:08.433 "name": "malloc1" 00:08:08.433 }, 00:08:08.433 "method": "bdev_malloc_create" 00:08:08.433 }, 00:08:08.433 { 00:08:08.433 "method": "bdev_wait_for_examine" 00:08:08.433 } 00:08:08.433 ] 00:08:08.433 } 00:08:08.433 ] 00:08:08.433 } 00:08:08.433 [2024-12-06 11:04:19.550640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.692 [2024-12-06 11:04:19.588116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.067  [2024-12-06T11:04:21.844Z] Copying: 232/512 [MB] (232 MBps) [2024-12-06T11:04:22.129Z] Copying: 462/512 [MB] (229 MBps) [2024-12-06T11:04:22.697Z] Copying: 512/512 [MB] (average 226 MBps) 00:08:11.550 00:08:11.550 00:08:11.550 real 0m6.078s 00:08:11.550 user 0m5.397s 00:08:11.550 sys 0m0.528s 00:08:11.550 ************************************ 00:08:11.550 END TEST dd_malloc_copy 00:08:11.550 ************************************ 00:08:11.550 11:04:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.550 11:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:11.550 00:08:11.550 real 0m6.292s 00:08:11.550 user 0m5.508s 00:08:11.550 sys 0m0.633s 00:08:11.550 11:04:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.550 11:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:11.550 ************************************ 00:08:11.550 END TEST spdk_dd_malloc 00:08:11.550 ************************************ 00:08:11.550 11:04:22 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:11.550 11:04:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:11.550 11:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.550 11:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:11.550 ************************************ 00:08:11.550 START TEST spdk_dd_bdev_to_bdev 00:08:11.550 ************************************ 00:08:11.550 11:04:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:11.550 * Looking for test storage... 
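The malloc copy above needs no real hardware: spdk_dd builds two RAM-backed bdevs from an inline JSON config and copies one into the other. Stripped of the test plumbing, the config echoed in the trace has roughly the shape below; the spdk_dd path and the temporary config file are assumptions of this sketch (the test itself feeds the JSON through /dev/fd/62).

#!/usr/bin/env bash
# Sketch: two 512 MiB malloc bdevs (1048576 blocks x 512 B) and a bdev-to-bdev copy.
set -eu
DD=/path/to/spdk_dd   # placeholder

cat > malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

"$DD" --ib=malloc0 --ob=malloc1 --json malloc.json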
00:08:11.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:11.550 11:04:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.550 11:04:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.550 11:04:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.550 11:04:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.550 11:04:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.550 11:04:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.550 11:04:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.550 11:04:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.550 11:04:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.550 11:04:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.550 11:04:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.550 11:04:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.550 11:04:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.550 11:04:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.550 11:04:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.550 11:04:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.550 11:04:22 -- scripts/common.sh@344 -- # : 1 00:08:11.550 11:04:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.550 11:04:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.550 11:04:22 -- scripts/common.sh@364 -- # decimal 1 00:08:11.550 11:04:22 -- scripts/common.sh@352 -- # local d=1 00:08:11.550 11:04:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.550 11:04:22 -- scripts/common.sh@354 -- # echo 1 00:08:11.550 11:04:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.550 11:04:22 -- scripts/common.sh@365 -- # decimal 2 00:08:11.550 11:04:22 -- scripts/common.sh@352 -- # local d=2 00:08:11.550 11:04:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.550 11:04:22 -- scripts/common.sh@354 -- # echo 2 00:08:11.550 11:04:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.550 11:04:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.550 11:04:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.550 11:04:22 -- scripts/common.sh@367 -- # return 0 00:08:11.550 11:04:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.550 11:04:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.550 --rc genhtml_branch_coverage=1 00:08:11.550 --rc genhtml_function_coverage=1 00:08:11.550 --rc genhtml_legend=1 00:08:11.550 --rc geninfo_all_blocks=1 00:08:11.550 --rc geninfo_unexecuted_blocks=1 00:08:11.550 00:08:11.550 ' 00:08:11.550 11:04:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.550 --rc genhtml_branch_coverage=1 00:08:11.550 --rc genhtml_function_coverage=1 00:08:11.550 --rc genhtml_legend=1 00:08:11.550 --rc geninfo_all_blocks=1 00:08:11.550 --rc geninfo_unexecuted_blocks=1 00:08:11.550 00:08:11.550 ' 00:08:11.550 11:04:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.551 --rc genhtml_branch_coverage=1 00:08:11.551 --rc genhtml_function_coverage=1 00:08:11.551 --rc genhtml_legend=1 00:08:11.551 --rc geninfo_all_blocks=1 00:08:11.551 --rc geninfo_unexecuted_blocks=1 00:08:11.551 00:08:11.551 ' 00:08:11.551 11:04:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.551 --rc genhtml_branch_coverage=1 00:08:11.551 --rc genhtml_function_coverage=1 00:08:11.551 --rc genhtml_legend=1 00:08:11.551 --rc geninfo_all_blocks=1 00:08:11.551 --rc geninfo_unexecuted_blocks=1 00:08:11.551 00:08:11.551 ' 00:08:11.551 11:04:22 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.551 11:04:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.551 11:04:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.551 11:04:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.551 11:04:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.551 11:04:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.551 11:04:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.551 11:04:22 -- paths/export.sh@5 -- # export PATH 00:08:11.551 11:04:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:11.551 11:04:22 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:11.551 11:04:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:11.551 11:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.551 11:04:22 -- common/autotest_common.sh@10 -- # set +x 00:08:11.810 ************************************ 00:08:11.810 START TEST dd_inflate_file 00:08:11.810 ************************************ 00:08:11.810 11:04:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:11.810 [2024-12-06 11:04:22.750447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:11.810 [2024-12-06 11:04:22.750578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70758 ] 00:08:11.810 [2024-12-06 11:04:22.887826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.810 [2024-12-06 11:04:22.922807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.067  [2024-12-06T11:04:23.214Z] Copying: 64/64 [MB] (average 2206 MBps) 00:08:12.067 00:08:12.067 00:08:12.067 real 0m0.433s 00:08:12.067 user 0m0.191s 00:08:12.067 sys 0m0.126s 00:08:12.067 11:04:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.067 ************************************ 00:08:12.067 END TEST dd_inflate_file 00:08:12.067 ************************************ 00:08:12.067 11:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:12.067 11:04:23 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:12.067 11:04:23 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:12.067 11:04:23 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.067 11:04:23 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:12.067 11:04:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:12.067 11:04:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.067 11:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.067 11:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:12.067 11:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:12.067 ************************************ 00:08:12.067 START TEST dd_copy_to_out_bdev 00:08:12.067 ************************************ 00:08:12.067 11:04:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.325 { 00:08:12.325 "subsystems": [ 00:08:12.325 { 00:08:12.325 "subsystem": "bdev", 00:08:12.325 "config": [ 00:08:12.325 { 00:08:12.325 "params": { 00:08:12.325 "trtype": "pcie", 00:08:12.325 "traddr": "0000:00:06.0", 00:08:12.325 "name": "Nvme0" 00:08:12.325 }, 00:08:12.325 "method": "bdev_nvme_attach_controller" 00:08:12.325 }, 00:08:12.325 { 00:08:12.325 "params": { 00:08:12.325 "trtype": "pcie", 00:08:12.325 "traddr": "0000:00:07.0", 00:08:12.325 "name": "Nvme1" 00:08:12.325 }, 00:08:12.325 "method": "bdev_nvme_attach_controller" 00:08:12.325 }, 00:08:12.325 { 00:08:12.325 "method": "bdev_wait_for_examine" 00:08:12.325 } 00:08:12.325 ] 00:08:12.325 } 00:08:12.325 ] 00:08:12.325 } 00:08:12.325 [2024-12-06 11:04:23.234390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:12.325 [2024-12-06 11:04:23.234474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70795 ] 00:08:12.325 [2024-12-06 11:04:23.373418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.325 [2024-12-06 11:04:23.405384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.700  [2024-12-06T11:04:25.107Z] Copying: 47/64 [MB] (47 MBps) [2024-12-06T11:04:25.107Z] Copying: 64/64 [MB] (average 49 MBps) 00:08:13.960 00:08:13.960 00:08:13.960 real 0m1.915s 00:08:13.960 user 0m1.664s 00:08:13.960 sys 0m0.177s 00:08:13.960 11:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.960 ************************************ 00:08:13.960 END TEST dd_copy_to_out_bdev 00:08:13.960 ************************************ 00:08:13.960 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:14.218 11:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.218 11:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.218 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:14.218 ************************************ 00:08:14.218 START TEST dd_offset_magic 00:08:14.218 ************************************ 00:08:14.218 11:04:25 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:14.218 11:04:25 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:14.218 11:04:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.218 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:14.218 [2024-12-06 11:04:25.201118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
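dd_copy_to_out_bdev, which just passed, writes a plain file straight onto an NVMe namespace: the JSON printed in the trace only attaches the two PCIe controllers as Nvme0 and Nvme1 and waits for their bdevs to appear. A hedged sketch of the same shape follows; the PCI addresses mirror the trace, while the file names and the spdk_dd path are assumed.

#!/usr/bin/env bash
# Sketch: attach two PCIe controllers, then copy a local file onto Nvme0n1.
set -eu
DD=/path/to/spdk_dd   # placeholder

cat > nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:07.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

dd if=/dev/urandom of=dd.dump0 bs=1048576 count=64 status=none
"$DD" --if=dd.dump0 --ob=Nvme0n1 --json nvme.json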
00:08:14.218 [2024-12-06 11:04:25.201208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70839 ] 00:08:14.218 { 00:08:14.218 "subsystems": [ 00:08:14.218 { 00:08:14.218 "subsystem": "bdev", 00:08:14.218 "config": [ 00:08:14.218 { 00:08:14.218 "params": { 00:08:14.218 "trtype": "pcie", 00:08:14.218 "traddr": "0000:00:06.0", 00:08:14.218 "name": "Nvme0" 00:08:14.218 }, 00:08:14.218 "method": "bdev_nvme_attach_controller" 00:08:14.218 }, 00:08:14.218 { 00:08:14.218 "params": { 00:08:14.218 "trtype": "pcie", 00:08:14.218 "traddr": "0000:00:07.0", 00:08:14.218 "name": "Nvme1" 00:08:14.218 }, 00:08:14.218 "method": "bdev_nvme_attach_controller" 00:08:14.218 }, 00:08:14.218 { 00:08:14.218 "method": "bdev_wait_for_examine" 00:08:14.218 } 00:08:14.218 ] 00:08:14.218 } 00:08:14.218 ] 00:08:14.218 } 00:08:14.218 [2024-12-06 11:04:25.330251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.218 [2024-12-06 11:04:25.362707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.475  [2024-12-06T11:04:25.879Z] Copying: 65/65 [MB] (average 984 MBps) 00:08:14.732 00:08:14.732 11:04:25 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:14.733 11:04:25 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:14.733 11:04:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.733 11:04:25 -- common/autotest_common.sh@10 -- # set +x 00:08:14.733 [2024-12-06 11:04:25.809813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:14.733 [2024-12-06 11:04:25.809916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:08:14.733 { 00:08:14.733 "subsystems": [ 00:08:14.733 { 00:08:14.733 "subsystem": "bdev", 00:08:14.733 "config": [ 00:08:14.733 { 00:08:14.733 "params": { 00:08:14.733 "trtype": "pcie", 00:08:14.733 "traddr": "0000:00:06.0", 00:08:14.733 "name": "Nvme0" 00:08:14.733 }, 00:08:14.733 "method": "bdev_nvme_attach_controller" 00:08:14.733 }, 00:08:14.733 { 00:08:14.733 "params": { 00:08:14.733 "trtype": "pcie", 00:08:14.733 "traddr": "0000:00:07.0", 00:08:14.733 "name": "Nvme1" 00:08:14.733 }, 00:08:14.733 "method": "bdev_nvme_attach_controller" 00:08:14.733 }, 00:08:14.733 { 00:08:14.733 "method": "bdev_wait_for_examine" 00:08:14.733 } 00:08:14.733 ] 00:08:14.733 } 00:08:14.733 ] 00:08:14.733 } 00:08:14.990 [2024-12-06 11:04:25.945019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.990 [2024-12-06 11:04:25.978411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.247  [2024-12-06T11:04:26.394Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.247 00:08:15.247 11:04:26 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:15.247 11:04:26 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:15.247 11:04:26 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:15.247 11:04:26 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:15.247 11:04:26 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:15.247 11:04:26 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.247 11:04:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 [2024-12-06 11:04:26.344740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:15.247 [2024-12-06 11:04:26.344840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:08:15.247 { 00:08:15.247 "subsystems": [ 00:08:15.247 { 00:08:15.247 "subsystem": "bdev", 00:08:15.247 "config": [ 00:08:15.247 { 00:08:15.247 "params": { 00:08:15.247 "trtype": "pcie", 00:08:15.247 "traddr": "0000:00:06.0", 00:08:15.247 "name": "Nvme0" 00:08:15.247 }, 00:08:15.247 "method": "bdev_nvme_attach_controller" 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "params": { 00:08:15.247 "trtype": "pcie", 00:08:15.247 "traddr": "0000:00:07.0", 00:08:15.247 "name": "Nvme1" 00:08:15.247 }, 00:08:15.247 "method": "bdev_nvme_attach_controller" 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "method": "bdev_wait_for_examine" 00:08:15.247 } 00:08:15.247 ] 00:08:15.247 } 00:08:15.247 ] 00:08:15.247 } 00:08:15.505 [2024-12-06 11:04:26.483405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.505 [2024-12-06 11:04:26.513567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.763  [2024-12-06T11:04:26.910Z] Copying: 65/65 [MB] (average 1101 MBps) 00:08:15.763 00:08:15.763 11:04:26 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:15.763 11:04:26 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:15.763 11:04:26 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.763 11:04:26 -- common/autotest_common.sh@10 -- # set +x 00:08:16.022 [2024-12-06 11:04:26.942109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:16.022 [2024-12-06 11:04:26.942212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:08:16.022 { 00:08:16.022 "subsystems": [ 00:08:16.022 { 00:08:16.022 "subsystem": "bdev", 00:08:16.022 "config": [ 00:08:16.022 { 00:08:16.022 "params": { 00:08:16.022 "trtype": "pcie", 00:08:16.022 "traddr": "0000:00:06.0", 00:08:16.022 "name": "Nvme0" 00:08:16.022 }, 00:08:16.022 "method": "bdev_nvme_attach_controller" 00:08:16.022 }, 00:08:16.022 { 00:08:16.022 "params": { 00:08:16.022 "trtype": "pcie", 00:08:16.022 "traddr": "0000:00:07.0", 00:08:16.022 "name": "Nvme1" 00:08:16.022 }, 00:08:16.022 "method": "bdev_nvme_attach_controller" 00:08:16.022 }, 00:08:16.022 { 00:08:16.022 "method": "bdev_wait_for_examine" 00:08:16.022 } 00:08:16.022 ] 00:08:16.022 } 00:08:16.022 ] 00:08:16.022 } 00:08:16.022 [2024-12-06 11:04:27.074659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.022 [2024-12-06 11:04:27.112746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.281  [2024-12-06T11:04:27.687Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.540 00:08:16.540 11:04:27 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:16.540 11:04:27 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:16.540 00:08:16.540 real 0m2.297s 00:08:16.540 user 0m1.651s 00:08:16.540 sys 0m0.451s 00:08:16.540 11:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.540 ************************************ 00:08:16.540 END TEST dd_offset_magic 00:08:16.540 ************************************ 00:08:16.540 11:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:16.540 11:04:27 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:16.540 11:04:27 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:16.540 11:04:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:16.540 11:04:27 -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.540 11:04:27 -- dd/common.sh@12 -- # local size=4194330 00:08:16.540 11:04:27 -- dd/common.sh@14 -- # local bs=1048576 00:08:16.540 11:04:27 -- dd/common.sh@15 -- # local count=5 00:08:16.540 11:04:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:16.540 11:04:27 -- dd/common.sh@18 -- # gen_conf 00:08:16.540 11:04:27 -- dd/common.sh@31 -- # xtrace_disable 00:08:16.540 11:04:27 -- common/autotest_common.sh@10 -- # set +x 00:08:16.540 [2024-12-06 11:04:27.550381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:16.540 [2024-12-06 11:04:27.550476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:08:16.540 { 00:08:16.540 "subsystems": [ 00:08:16.540 { 00:08:16.540 "subsystem": "bdev", 00:08:16.540 "config": [ 00:08:16.540 { 00:08:16.540 "params": { 00:08:16.540 "trtype": "pcie", 00:08:16.540 "traddr": "0000:00:06.0", 00:08:16.540 "name": "Nvme0" 00:08:16.540 }, 00:08:16.540 "method": "bdev_nvme_attach_controller" 00:08:16.540 }, 00:08:16.540 { 00:08:16.540 "params": { 00:08:16.540 "trtype": "pcie", 00:08:16.540 "traddr": "0000:00:07.0", 00:08:16.540 "name": "Nvme1" 00:08:16.540 }, 00:08:16.540 "method": "bdev_nvme_attach_controller" 00:08:16.540 }, 00:08:16.540 { 00:08:16.540 "method": "bdev_wait_for_examine" 00:08:16.540 } 00:08:16.540 ] 00:08:16.540 } 00:08:16.540 ] 00:08:16.540 } 00:08:16.799 [2024-12-06 11:04:27.693535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.799 [2024-12-06 11:04:27.724560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.799  [2024-12-06T11:04:28.205Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:08:17.058 00:08:17.058 11:04:28 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:17.058 11:04:28 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:17.058 11:04:28 -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.058 11:04:28 -- dd/common.sh@12 -- # local size=4194330 00:08:17.058 11:04:28 -- dd/common.sh@14 -- # local bs=1048576 00:08:17.058 11:04:28 -- dd/common.sh@15 -- # local count=5 00:08:17.058 11:04:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:17.058 11:04:28 -- dd/common.sh@18 -- # gen_conf 00:08:17.058 11:04:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:17.058 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.058 [2024-12-06 11:04:28.082851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
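The clear_nvme cleanup above simply zero-fills the first few megabytes of each Nvme bdev before the next test group. A minimal equivalent, under the assumption that the same JSON bdev config is supplied (CONF and SPDK_DD are placeholder names; the real run passes gen_conf output on /dev/fd/62):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=bdev.json                        # placeholder for the Nvme0/Nvme1 attach config
  size=4194330                          # bytes to clear, per the clear_nvme call above
  bs=1048576                            # 1 MiB blocks
  count=$(( (size + bs - 1) / bs ))     # rounds up to the 5 blocks seen in the log

  # stream zeroes through spdk_dd into the bdev to wipe the region
  "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json "$CONF"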
00:08:17.058 [2024-12-06 11:04:28.082971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:08:17.058 { 00:08:17.058 "subsystems": [ 00:08:17.058 { 00:08:17.058 "subsystem": "bdev", 00:08:17.058 "config": [ 00:08:17.058 { 00:08:17.058 "params": { 00:08:17.058 "trtype": "pcie", 00:08:17.058 "traddr": "0000:00:06.0", 00:08:17.058 "name": "Nvme0" 00:08:17.058 }, 00:08:17.058 "method": "bdev_nvme_attach_controller" 00:08:17.058 }, 00:08:17.058 { 00:08:17.058 "params": { 00:08:17.058 "trtype": "pcie", 00:08:17.058 "traddr": "0000:00:07.0", 00:08:17.058 "name": "Nvme1" 00:08:17.058 }, 00:08:17.058 "method": "bdev_nvme_attach_controller" 00:08:17.058 }, 00:08:17.058 { 00:08:17.058 "method": "bdev_wait_for_examine" 00:08:17.058 } 00:08:17.058 ] 00:08:17.058 } 00:08:17.058 ] 00:08:17.058 } 00:08:17.318 [2024-12-06 11:04:28.216062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.318 [2024-12-06 11:04:28.255962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.318  [2024-12-06T11:04:28.724Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:17.577 00:08:17.577 11:04:28 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:17.577 00:08:17.577 real 0m6.113s 00:08:17.577 user 0m4.434s 00:08:17.577 sys 0m1.185s 00:08:17.577 11:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.577 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.577 ************************************ 00:08:17.577 END TEST spdk_dd_bdev_to_bdev 00:08:17.577 ************************************ 00:08:17.577 11:04:28 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:17.577 11:04:28 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:17.577 11:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.577 11:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.577 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.577 ************************************ 00:08:17.577 START TEST spdk_dd_uring 00:08:17.577 ************************************ 00:08:17.577 11:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:17.837 * Looking for test storage... 
00:08:17.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.837 11:04:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.837 11:04:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.837 11:04:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.837 11:04:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.837 11:04:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.837 11:04:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.837 11:04:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.837 11:04:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.837 11:04:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.837 11:04:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.837 11:04:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.837 11:04:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.837 11:04:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.837 11:04:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.837 11:04:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.837 11:04:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.837 11:04:28 -- scripts/common.sh@344 -- # : 1 00:08:17.837 11:04:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.837 11:04:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.837 11:04:28 -- scripts/common.sh@364 -- # decimal 1 00:08:17.837 11:04:28 -- scripts/common.sh@352 -- # local d=1 00:08:17.837 11:04:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.837 11:04:28 -- scripts/common.sh@354 -- # echo 1 00:08:17.837 11:04:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.837 11:04:28 -- scripts/common.sh@365 -- # decimal 2 00:08:17.837 11:04:28 -- scripts/common.sh@352 -- # local d=2 00:08:17.837 11:04:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.837 11:04:28 -- scripts/common.sh@354 -- # echo 2 00:08:17.837 11:04:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.837 11:04:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.837 11:04:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.837 11:04:28 -- scripts/common.sh@367 -- # return 0 00:08:17.837 11:04:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.837 11:04:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.837 --rc genhtml_branch_coverage=1 00:08:17.837 --rc genhtml_function_coverage=1 00:08:17.837 --rc genhtml_legend=1 00:08:17.837 --rc geninfo_all_blocks=1 00:08:17.837 --rc geninfo_unexecuted_blocks=1 00:08:17.837 00:08:17.837 ' 00:08:17.837 11:04:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.837 --rc genhtml_branch_coverage=1 00:08:17.837 --rc genhtml_function_coverage=1 00:08:17.837 --rc genhtml_legend=1 00:08:17.837 --rc geninfo_all_blocks=1 00:08:17.837 --rc geninfo_unexecuted_blocks=1 00:08:17.837 00:08:17.837 ' 00:08:17.837 11:04:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.837 --rc genhtml_branch_coverage=1 00:08:17.837 --rc genhtml_function_coverage=1 00:08:17.837 --rc genhtml_legend=1 00:08:17.837 --rc geninfo_all_blocks=1 00:08:17.837 --rc geninfo_unexecuted_blocks=1 00:08:17.837 00:08:17.837 ' 00:08:17.837 11:04:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.837 --rc genhtml_branch_coverage=1 00:08:17.837 --rc genhtml_function_coverage=1 00:08:17.837 --rc genhtml_legend=1 00:08:17.837 --rc geninfo_all_blocks=1 00:08:17.837 --rc geninfo_unexecuted_blocks=1 00:08:17.837 00:08:17.837 ' 00:08:17.837 11:04:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.837 11:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.837 11:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.837 11:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.837 11:04:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.837 11:04:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.837 11:04:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.837 11:04:28 -- paths/export.sh@5 -- # export PATH 00:08:17.837 11:04:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.837 11:04:28 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:17.838 11:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.838 11:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.838 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 ************************************ 00:08:17.838 START TEST dd_uring_copy 00:08:17.838 ************************************ 00:08:17.838 11:04:28 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:17.838 11:04:28 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:17.838 11:04:28 -- dd/uring.sh@16 -- # local magic 00:08:17.838 11:04:28 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:17.838 11:04:28 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:17.838 11:04:28 -- dd/uring.sh@19 -- # local verify_magic 00:08:17.838 11:04:28 -- dd/uring.sh@21 -- # init_zram 00:08:17.838 11:04:28 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:17.838 11:04:28 -- dd/common.sh@164 -- # return 00:08:17.838 11:04:28 -- dd/uring.sh@22 -- # create_zram_dev 00:08:17.838 11:04:28 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:17.838 11:04:28 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:17.838 11:04:28 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:17.838 11:04:28 -- dd/common.sh@181 -- # local id=1 00:08:17.838 11:04:28 -- dd/common.sh@182 -- # local size=512M 00:08:17.838 11:04:28 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:17.838 11:04:28 -- dd/common.sh@186 -- # echo 512M 00:08:17.838 11:04:28 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:17.838 11:04:28 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:17.838 11:04:28 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:17.838 11:04:28 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:17.838 11:04:28 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:17.838 11:04:28 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:17.838 11:04:28 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:17.838 11:04:28 -- dd/common.sh@98 -- # xtrace_disable 00:08:17.838 11:04:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.838 11:04:28 -- dd/uring.sh@41 -- # magic=1kx0vgdzkkma9imeqkhcr1avpkrugk8yzvagnew0wj3jlg2gcazqirpkp1xf4oha8lsustsnk6134uyzas0gk9grcfcoffx3gh19pm1d7mhsqr1klossu85u3gvxp9ei1pxtk5rawocb71mb97xnx5t4gorpmauy2bcn90ry2ira1e7hs2rc1f59xo8z6tdly7oywxac1fyob4fozlski5mvs8dch4s912yomi4rmobjdr0vj8r8sxdhu0kbmsmm6emu9gpaj4gqhpc1swbp6l1bpzc45bqvax7a7iheclnvsqlg8rvcmrt8fzjceejmlqyxamjxi706m6pmd46ccj43xvbnt1to74esm709xaxtes664k0vw32zyhrf93cgke4fg5ekbjdb8ja7lpgzblyo6qg5xpqntmvezcdshds7e0tcbgsgbjv43e1dc4yc3rx0n60jdt194k8a9ajdhe2yuqnzj7k866frypy9puh5xio26an0i6t7t74vcchwf3s2vbof7wajl315i8m5qzxgo68hwr5kkctmtu0uk9n06dfjgq7rnmuovdow2kg1ow1fsmaaux4dnwvkk40wxhsw6e3epk11krneedqkz7x908wfeq3buwiai2mjewu53ifmef9z0xc696hjqxatbt63tgiq02dhbwlvsis32zkdnnq3yctfn26x64ghds81gmnmfh4q8ev6saphqyenews2kip0hpcysza6am3iwuxj6gbqx5tn8xn90bw2ozvast1ndzzvf0dwu9yn9wfg0puyty60dy26ct9h28qlfn4e0tsdj17ker5ou28wuj99uuezmjd6byw26dp5uxwce28x9ajohbfsjef1ygwaykebyoqddjda7wnxk6uquki7f1a60ru8pcyar8dd47kryjtgd5x4ilq3oorl6hww29t8qi9vl20j4nhaihxhtm97nehvna9la374iabq276knu92ahp1dw0kpug5g1bc5y4juomor0pi9y6yzhsumd5j 00:08:17.838 11:04:28 -- dd/uring.sh@42 -- # echo 
1kx0vgdzkkma9imeqkhcr1avpkrugk8yzvagnew0wj3jlg2gcazqirpkp1xf4oha8lsustsnk6134uyzas0gk9grcfcoffx3gh19pm1d7mhsqr1klossu85u3gvxp9ei1pxtk5rawocb71mb97xnx5t4gorpmauy2bcn90ry2ira1e7hs2rc1f59xo8z6tdly7oywxac1fyob4fozlski5mvs8dch4s912yomi4rmobjdr0vj8r8sxdhu0kbmsmm6emu9gpaj4gqhpc1swbp6l1bpzc45bqvax7a7iheclnvsqlg8rvcmrt8fzjceejmlqyxamjxi706m6pmd46ccj43xvbnt1to74esm709xaxtes664k0vw32zyhrf93cgke4fg5ekbjdb8ja7lpgzblyo6qg5xpqntmvezcdshds7e0tcbgsgbjv43e1dc4yc3rx0n60jdt194k8a9ajdhe2yuqnzj7k866frypy9puh5xio26an0i6t7t74vcchwf3s2vbof7wajl315i8m5qzxgo68hwr5kkctmtu0uk9n06dfjgq7rnmuovdow2kg1ow1fsmaaux4dnwvkk40wxhsw6e3epk11krneedqkz7x908wfeq3buwiai2mjewu53ifmef9z0xc696hjqxatbt63tgiq02dhbwlvsis32zkdnnq3yctfn26x64ghds81gmnmfh4q8ev6saphqyenews2kip0hpcysza6am3iwuxj6gbqx5tn8xn90bw2ozvast1ndzzvf0dwu9yn9wfg0puyty60dy26ct9h28qlfn4e0tsdj17ker5ou28wuj99uuezmjd6byw26dp5uxwce28x9ajohbfsjef1ygwaykebyoqddjda7wnxk6uquki7f1a60ru8pcyar8dd47kryjtgd5x4ilq3oorl6hww29t8qi9vl20j4nhaihxhtm97nehvna9la374iabq276knu92ahp1dw0kpug5g1bc5y4juomor0pi9y6yzhsumd5j 00:08:17.838 11:04:28 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:17.838 [2024-12-06 11:04:28.942983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.838 [2024-12-06 11:04:28.943076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70997 ] 00:08:18.098 [2024-12-06 11:04:29.082073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.098 [2024-12-06 11:04:29.119652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.668  [2024-12-06T11:04:30.075Z] Copying: 511/511 [MB] (average 1458 MBps) 00:08:18.928 00:08:18.928 11:04:29 -- dd/uring.sh@54 -- # gen_conf 00:08:18.928 11:04:29 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:18.928 11:04:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:18.928 11:04:29 -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 [2024-12-06 11:04:29.941722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:18.928 [2024-12-06 11:04:29.941814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:08:18.928 { 00:08:18.928 "subsystems": [ 00:08:18.928 { 00:08:18.928 "subsystem": "bdev", 00:08:18.928 "config": [ 00:08:18.928 { 00:08:18.928 "params": { 00:08:18.928 "block_size": 512, 00:08:18.928 "num_blocks": 1048576, 00:08:18.928 "name": "malloc0" 00:08:18.928 }, 00:08:18.928 "method": "bdev_malloc_create" 00:08:18.928 }, 00:08:18.928 { 00:08:18.928 "params": { 00:08:18.928 "filename": "/dev/zram1", 00:08:18.928 "name": "uring0" 00:08:18.928 }, 00:08:18.928 "method": "bdev_uring_create" 00:08:18.928 }, 00:08:18.928 { 00:08:18.928 "method": "bdev_wait_for_examine" 00:08:18.928 } 00:08:18.928 ] 00:08:18.928 } 00:08:18.928 ] 00:08:18.928 } 00:08:19.186 [2024-12-06 11:04:30.081840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.186 [2024-12-06 11:04:30.122654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.566  [2024-12-06T11:04:32.650Z] Copying: 178/512 [MB] (178 MBps) [2024-12-06T11:04:33.220Z] Copying: 362/512 [MB] (183 MBps) [2024-12-06T11:04:33.492Z] Copying: 512/512 [MB] (average 181 MBps) 00:08:22.345 00:08:22.345 11:04:33 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:22.345 11:04:33 -- dd/uring.sh@60 -- # gen_conf 00:08:22.345 11:04:33 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.345 11:04:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.345 [2024-12-06 11:04:33.437579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
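Putting the uring_zram_copy steps together: the test hot-adds a zram device, backs a uring bdev (uring0) with it, builds a ~512 MB source file that starts with a 1024-character random magic, pushes the file into the bdev, then pulls it back out for comparison. The sketch below is a condensed reading of the log, not the script itself; it assumes the standard zram sysfs interface, and SPDK_DD, CONF and $magic are stand-ins ($magic being the 1024-character string shown above, CONF the malloc0/uring0 config fed via /dev/fd/62 in the real run).

  # hot-add a zram device and size it (standard zram sysfs interface assumed)
  id=$(cat /sys/class/zram-control/hot_add)
  echo 512M > "/sys/block/zram${id}/disksize"

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=bdev.json        # placeholder for the malloc0/uring0 config printed above
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

  # source file: the 1024-character magic first, then ~512 MB of zeroes appended
  printf '%s\n' "$magic" > "$DUMP0"
  "$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=536869887 --count=1

  # file -> uring bdev backed by /dev/zram1, then uring bdev -> second file
  "$SPDK_DD" --if="$DUMP0" --ob=uring0 --json "$CONF"
  "$SPDK_DD" --ib=uring0 --of="$DUMP1" --json "$CONF"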
00:08:22.345 [2024-12-06 11:04:33.437677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:08:22.345 { 00:08:22.345 "subsystems": [ 00:08:22.345 { 00:08:22.345 "subsystem": "bdev", 00:08:22.345 "config": [ 00:08:22.345 { 00:08:22.345 "params": { 00:08:22.345 "block_size": 512, 00:08:22.345 "num_blocks": 1048576, 00:08:22.345 "name": "malloc0" 00:08:22.345 }, 00:08:22.345 "method": "bdev_malloc_create" 00:08:22.345 }, 00:08:22.345 { 00:08:22.345 "params": { 00:08:22.345 "filename": "/dev/zram1", 00:08:22.345 "name": "uring0" 00:08:22.345 }, 00:08:22.345 "method": "bdev_uring_create" 00:08:22.345 }, 00:08:22.345 { 00:08:22.345 "method": "bdev_wait_for_examine" 00:08:22.345 } 00:08:22.345 ] 00:08:22.345 } 00:08:22.345 ] 00:08:22.345 } 00:08:22.604 [2024-12-06 11:04:33.578840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.605 [2024-12-06 11:04:33.618354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.984  [2024-12-06T11:04:36.064Z] Copying: 132/512 [MB] (132 MBps) [2024-12-06T11:04:37.001Z] Copying: 261/512 [MB] (128 MBps) [2024-12-06T11:04:37.938Z] Copying: 396/512 [MB] (135 MBps) [2024-12-06T11:04:38.197Z] Copying: 512/512 [MB] (average 129 MBps) 00:08:27.050 00:08:27.050 11:04:38 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:27.051 11:04:38 -- dd/uring.sh@66 -- # [[ 1kx0vgdzkkma9imeqkhcr1avpkrugk8yzvagnew0wj3jlg2gcazqirpkp1xf4oha8lsustsnk6134uyzas0gk9grcfcoffx3gh19pm1d7mhsqr1klossu85u3gvxp9ei1pxtk5rawocb71mb97xnx5t4gorpmauy2bcn90ry2ira1e7hs2rc1f59xo8z6tdly7oywxac1fyob4fozlski5mvs8dch4s912yomi4rmobjdr0vj8r8sxdhu0kbmsmm6emu9gpaj4gqhpc1swbp6l1bpzc45bqvax7a7iheclnvsqlg8rvcmrt8fzjceejmlqyxamjxi706m6pmd46ccj43xvbnt1to74esm709xaxtes664k0vw32zyhrf93cgke4fg5ekbjdb8ja7lpgzblyo6qg5xpqntmvezcdshds7e0tcbgsgbjv43e1dc4yc3rx0n60jdt194k8a9ajdhe2yuqnzj7k866frypy9puh5xio26an0i6t7t74vcchwf3s2vbof7wajl315i8m5qzxgo68hwr5kkctmtu0uk9n06dfjgq7rnmuovdow2kg1ow1fsmaaux4dnwvkk40wxhsw6e3epk11krneedqkz7x908wfeq3buwiai2mjewu53ifmef9z0xc696hjqxatbt63tgiq02dhbwlvsis32zkdnnq3yctfn26x64ghds81gmnmfh4q8ev6saphqyenews2kip0hpcysza6am3iwuxj6gbqx5tn8xn90bw2ozvast1ndzzvf0dwu9yn9wfg0puyty60dy26ct9h28qlfn4e0tsdj17ker5ou28wuj99uuezmjd6byw26dp5uxwce28x9ajohbfsjef1ygwaykebyoqddjda7wnxk6uquki7f1a60ru8pcyar8dd47kryjtgd5x4ilq3oorl6hww29t8qi9vl20j4nhaihxhtm97nehvna9la374iabq276knu92ahp1dw0kpug5g1bc5y4juomor0pi9y6yzhsumd5j == 
\1\k\x\0\v\g\d\z\k\k\m\a\9\i\m\e\q\k\h\c\r\1\a\v\p\k\r\u\g\k\8\y\z\v\a\g\n\e\w\0\w\j\3\j\l\g\2\g\c\a\z\q\i\r\p\k\p\1\x\f\4\o\h\a\8\l\s\u\s\t\s\n\k\6\1\3\4\u\y\z\a\s\0\g\k\9\g\r\c\f\c\o\f\f\x\3\g\h\1\9\p\m\1\d\7\m\h\s\q\r\1\k\l\o\s\s\u\8\5\u\3\g\v\x\p\9\e\i\1\p\x\t\k\5\r\a\w\o\c\b\7\1\m\b\9\7\x\n\x\5\t\4\g\o\r\p\m\a\u\y\2\b\c\n\9\0\r\y\2\i\r\a\1\e\7\h\s\2\r\c\1\f\5\9\x\o\8\z\6\t\d\l\y\7\o\y\w\x\a\c\1\f\y\o\b\4\f\o\z\l\s\k\i\5\m\v\s\8\d\c\h\4\s\9\1\2\y\o\m\i\4\r\m\o\b\j\d\r\0\v\j\8\r\8\s\x\d\h\u\0\k\b\m\s\m\m\6\e\m\u\9\g\p\a\j\4\g\q\h\p\c\1\s\w\b\p\6\l\1\b\p\z\c\4\5\b\q\v\a\x\7\a\7\i\h\e\c\l\n\v\s\q\l\g\8\r\v\c\m\r\t\8\f\z\j\c\e\e\j\m\l\q\y\x\a\m\j\x\i\7\0\6\m\6\p\m\d\4\6\c\c\j\4\3\x\v\b\n\t\1\t\o\7\4\e\s\m\7\0\9\x\a\x\t\e\s\6\6\4\k\0\v\w\3\2\z\y\h\r\f\9\3\c\g\k\e\4\f\g\5\e\k\b\j\d\b\8\j\a\7\l\p\g\z\b\l\y\o\6\q\g\5\x\p\q\n\t\m\v\e\z\c\d\s\h\d\s\7\e\0\t\c\b\g\s\g\b\j\v\4\3\e\1\d\c\4\y\c\3\r\x\0\n\6\0\j\d\t\1\9\4\k\8\a\9\a\j\d\h\e\2\y\u\q\n\z\j\7\k\8\6\6\f\r\y\p\y\9\p\u\h\5\x\i\o\2\6\a\n\0\i\6\t\7\t\7\4\v\c\c\h\w\f\3\s\2\v\b\o\f\7\w\a\j\l\3\1\5\i\8\m\5\q\z\x\g\o\6\8\h\w\r\5\k\k\c\t\m\t\u\0\u\k\9\n\0\6\d\f\j\g\q\7\r\n\m\u\o\v\d\o\w\2\k\g\1\o\w\1\f\s\m\a\a\u\x\4\d\n\w\v\k\k\4\0\w\x\h\s\w\6\e\3\e\p\k\1\1\k\r\n\e\e\d\q\k\z\7\x\9\0\8\w\f\e\q\3\b\u\w\i\a\i\2\m\j\e\w\u\5\3\i\f\m\e\f\9\z\0\x\c\6\9\6\h\j\q\x\a\t\b\t\6\3\t\g\i\q\0\2\d\h\b\w\l\v\s\i\s\3\2\z\k\d\n\n\q\3\y\c\t\f\n\2\6\x\6\4\g\h\d\s\8\1\g\m\n\m\f\h\4\q\8\e\v\6\s\a\p\h\q\y\e\n\e\w\s\2\k\i\p\0\h\p\c\y\s\z\a\6\a\m\3\i\w\u\x\j\6\g\b\q\x\5\t\n\8\x\n\9\0\b\w\2\o\z\v\a\s\t\1\n\d\z\z\v\f\0\d\w\u\9\y\n\9\w\f\g\0\p\u\y\t\y\6\0\d\y\2\6\c\t\9\h\2\8\q\l\f\n\4\e\0\t\s\d\j\1\7\k\e\r\5\o\u\2\8\w\u\j\9\9\u\u\e\z\m\j\d\6\b\y\w\2\6\d\p\5\u\x\w\c\e\2\8\x\9\a\j\o\h\b\f\s\j\e\f\1\y\g\w\a\y\k\e\b\y\o\q\d\d\j\d\a\7\w\n\x\k\6\u\q\u\k\i\7\f\1\a\6\0\r\u\8\p\c\y\a\r\8\d\d\4\7\k\r\y\j\t\g\d\5\x\4\i\l\q\3\o\o\r\l\6\h\w\w\2\9\t\8\q\i\9\v\l\2\0\j\4\n\h\a\i\h\x\h\t\m\9\7\n\e\h\v\n\a\9\l\a\3\7\4\i\a\b\q\2\7\6\k\n\u\9\2\a\h\p\1\d\w\0\k\p\u\g\5\g\1\b\c\5\y\4\j\u\o\m\o\r\0\p\i\9\y\6\y\z\h\s\u\m\d\5\j ]] 00:08:27.051 11:04:38 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:27.051 11:04:38 -- dd/uring.sh@69 -- # [[ 1kx0vgdzkkma9imeqkhcr1avpkrugk8yzvagnew0wj3jlg2gcazqirpkp1xf4oha8lsustsnk6134uyzas0gk9grcfcoffx3gh19pm1d7mhsqr1klossu85u3gvxp9ei1pxtk5rawocb71mb97xnx5t4gorpmauy2bcn90ry2ira1e7hs2rc1f59xo8z6tdly7oywxac1fyob4fozlski5mvs8dch4s912yomi4rmobjdr0vj8r8sxdhu0kbmsmm6emu9gpaj4gqhpc1swbp6l1bpzc45bqvax7a7iheclnvsqlg8rvcmrt8fzjceejmlqyxamjxi706m6pmd46ccj43xvbnt1to74esm709xaxtes664k0vw32zyhrf93cgke4fg5ekbjdb8ja7lpgzblyo6qg5xpqntmvezcdshds7e0tcbgsgbjv43e1dc4yc3rx0n60jdt194k8a9ajdhe2yuqnzj7k866frypy9puh5xio26an0i6t7t74vcchwf3s2vbof7wajl315i8m5qzxgo68hwr5kkctmtu0uk9n06dfjgq7rnmuovdow2kg1ow1fsmaaux4dnwvkk40wxhsw6e3epk11krneedqkz7x908wfeq3buwiai2mjewu53ifmef9z0xc696hjqxatbt63tgiq02dhbwlvsis32zkdnnq3yctfn26x64ghds81gmnmfh4q8ev6saphqyenews2kip0hpcysza6am3iwuxj6gbqx5tn8xn90bw2ozvast1ndzzvf0dwu9yn9wfg0puyty60dy26ct9h28qlfn4e0tsdj17ker5ou28wuj99uuezmjd6byw26dp5uxwce28x9ajohbfsjef1ygwaykebyoqddjda7wnxk6uquki7f1a60ru8pcyar8dd47kryjtgd5x4ilq3oorl6hww29t8qi9vl20j4nhaihxhtm97nehvna9la374iabq276knu92ahp1dw0kpug5g1bc5y4juomor0pi9y6yzhsumd5j == 
\1\k\x\0\v\g\d\z\k\k\m\a\9\i\m\e\q\k\h\c\r\1\a\v\p\k\r\u\g\k\8\y\z\v\a\g\n\e\w\0\w\j\3\j\l\g\2\g\c\a\z\q\i\r\p\k\p\1\x\f\4\o\h\a\8\l\s\u\s\t\s\n\k\6\1\3\4\u\y\z\a\s\0\g\k\9\g\r\c\f\c\o\f\f\x\3\g\h\1\9\p\m\1\d\7\m\h\s\q\r\1\k\l\o\s\s\u\8\5\u\3\g\v\x\p\9\e\i\1\p\x\t\k\5\r\a\w\o\c\b\7\1\m\b\9\7\x\n\x\5\t\4\g\o\r\p\m\a\u\y\2\b\c\n\9\0\r\y\2\i\r\a\1\e\7\h\s\2\r\c\1\f\5\9\x\o\8\z\6\t\d\l\y\7\o\y\w\x\a\c\1\f\y\o\b\4\f\o\z\l\s\k\i\5\m\v\s\8\d\c\h\4\s\9\1\2\y\o\m\i\4\r\m\o\b\j\d\r\0\v\j\8\r\8\s\x\d\h\u\0\k\b\m\s\m\m\6\e\m\u\9\g\p\a\j\4\g\q\h\p\c\1\s\w\b\p\6\l\1\b\p\z\c\4\5\b\q\v\a\x\7\a\7\i\h\e\c\l\n\v\s\q\l\g\8\r\v\c\m\r\t\8\f\z\j\c\e\e\j\m\l\q\y\x\a\m\j\x\i\7\0\6\m\6\p\m\d\4\6\c\c\j\4\3\x\v\b\n\t\1\t\o\7\4\e\s\m\7\0\9\x\a\x\t\e\s\6\6\4\k\0\v\w\3\2\z\y\h\r\f\9\3\c\g\k\e\4\f\g\5\e\k\b\j\d\b\8\j\a\7\l\p\g\z\b\l\y\o\6\q\g\5\x\p\q\n\t\m\v\e\z\c\d\s\h\d\s\7\e\0\t\c\b\g\s\g\b\j\v\4\3\e\1\d\c\4\y\c\3\r\x\0\n\6\0\j\d\t\1\9\4\k\8\a\9\a\j\d\h\e\2\y\u\q\n\z\j\7\k\8\6\6\f\r\y\p\y\9\p\u\h\5\x\i\o\2\6\a\n\0\i\6\t\7\t\7\4\v\c\c\h\w\f\3\s\2\v\b\o\f\7\w\a\j\l\3\1\5\i\8\m\5\q\z\x\g\o\6\8\h\w\r\5\k\k\c\t\m\t\u\0\u\k\9\n\0\6\d\f\j\g\q\7\r\n\m\u\o\v\d\o\w\2\k\g\1\o\w\1\f\s\m\a\a\u\x\4\d\n\w\v\k\k\4\0\w\x\h\s\w\6\e\3\e\p\k\1\1\k\r\n\e\e\d\q\k\z\7\x\9\0\8\w\f\e\q\3\b\u\w\i\a\i\2\m\j\e\w\u\5\3\i\f\m\e\f\9\z\0\x\c\6\9\6\h\j\q\x\a\t\b\t\6\3\t\g\i\q\0\2\d\h\b\w\l\v\s\i\s\3\2\z\k\d\n\n\q\3\y\c\t\f\n\2\6\x\6\4\g\h\d\s\8\1\g\m\n\m\f\h\4\q\8\e\v\6\s\a\p\h\q\y\e\n\e\w\s\2\k\i\p\0\h\p\c\y\s\z\a\6\a\m\3\i\w\u\x\j\6\g\b\q\x\5\t\n\8\x\n\9\0\b\w\2\o\z\v\a\s\t\1\n\d\z\z\v\f\0\d\w\u\9\y\n\9\w\f\g\0\p\u\y\t\y\6\0\d\y\2\6\c\t\9\h\2\8\q\l\f\n\4\e\0\t\s\d\j\1\7\k\e\r\5\o\u\2\8\w\u\j\9\9\u\u\e\z\m\j\d\6\b\y\w\2\6\d\p\5\u\x\w\c\e\2\8\x\9\a\j\o\h\b\f\s\j\e\f\1\y\g\w\a\y\k\e\b\y\o\q\d\d\j\d\a\7\w\n\x\k\6\u\q\u\k\i\7\f\1\a\6\0\r\u\8\p\c\y\a\r\8\d\d\4\7\k\r\y\j\t\g\d\5\x\4\i\l\q\3\o\o\r\l\6\h\w\w\2\9\t\8\q\i\9\v\l\2\0\j\4\n\h\a\i\h\x\h\t\m\9\7\n\e\h\v\n\a\9\l\a\3\7\4\i\a\b\q\2\7\6\k\n\u\9\2\a\h\p\1\d\w\0\k\p\u\g\5\g\1\b\c\5\y\4\j\u\o\m\o\r\0\p\i\9\y\6\y\z\h\s\u\m\d\5\j ]] 00:08:27.051 11:04:38 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:27.310 11:04:38 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:27.310 11:04:38 -- dd/uring.sh@75 -- # gen_conf 00:08:27.310 11:04:38 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.310 11:04:38 -- common/autotest_common.sh@10 -- # set +x 00:08:27.569 [2024-12-06 11:04:38.486292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
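The two read -rn1024 checks and the diff above are the actual pass/fail criteria for dd_uring_copy: the 1024-byte magic must survive both directions, and the two dump files must match byte for byte. Reduced to plain shell (file names from the log; $magic again stands for the 1024-character string generated earlier):

  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

  # the magic prefix must be intact in the read-back copy
  read -rn1024 verify_magic < "$DUMP1"
  [[ "$verify_magic" == "$magic" ]] || echo "magic mismatch after uring round-trip"

  # and the whole ~512 MB payload must be identical
  diff -q "$DUMP0" "$DUMP1" || echo "payload differs between magic.dump0 and magic.dump1"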
00:08:27.570 [2024-12-06 11:04:38.486398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:08:27.570 { 00:08:27.570 "subsystems": [ 00:08:27.570 { 00:08:27.570 "subsystem": "bdev", 00:08:27.570 "config": [ 00:08:27.570 { 00:08:27.570 "params": { 00:08:27.570 "block_size": 512, 00:08:27.570 "num_blocks": 1048576, 00:08:27.570 "name": "malloc0" 00:08:27.570 }, 00:08:27.570 "method": "bdev_malloc_create" 00:08:27.570 }, 00:08:27.570 { 00:08:27.570 "params": { 00:08:27.570 "filename": "/dev/zram1", 00:08:27.570 "name": "uring0" 00:08:27.570 }, 00:08:27.570 "method": "bdev_uring_create" 00:08:27.570 }, 00:08:27.570 { 00:08:27.570 "method": "bdev_wait_for_examine" 00:08:27.570 } 00:08:27.570 ] 00:08:27.570 } 00:08:27.570 ] 00:08:27.570 } 00:08:27.570 [2024-12-06 11:04:38.621925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.570 [2024-12-06 11:04:38.664253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.949  [2024-12-06T11:04:41.033Z] Copying: 130/512 [MB] (130 MBps) [2024-12-06T11:04:41.970Z] Copying: 292/512 [MB] (162 MBps) [2024-12-06T11:04:42.538Z] Copying: 445/512 [MB] (153 MBps) [2024-12-06T11:04:42.538Z] Copying: 512/512 [MB] (average 149 MBps) 00:08:31.391 00:08:31.391 11:04:42 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:31.391 11:04:42 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:31.391 11:04:42 -- dd/uring.sh@87 -- # : 00:08:31.391 11:04:42 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:31.391 11:04:42 -- dd/uring.sh@87 -- # : 00:08:31.391 11:04:42 -- dd/uring.sh@87 -- # gen_conf 00:08:31.391 11:04:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.391 11:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:31.391 [2024-12-06 11:04:42.522985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:31.391 [2024-12-06 11:04:42.523075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:08:31.391 { 00:08:31.391 "subsystems": [ 00:08:31.391 { 00:08:31.391 "subsystem": "bdev", 00:08:31.391 "config": [ 00:08:31.391 { 00:08:31.391 "params": { 00:08:31.391 "block_size": 512, 00:08:31.391 "num_blocks": 1048576, 00:08:31.391 "name": "malloc0" 00:08:31.391 }, 00:08:31.391 "method": "bdev_malloc_create" 00:08:31.391 }, 00:08:31.391 { 00:08:31.391 "params": { 00:08:31.391 "filename": "/dev/zram1", 00:08:31.391 "name": "uring0" 00:08:31.391 }, 00:08:31.391 "method": "bdev_uring_create" 00:08:31.391 }, 00:08:31.391 { 00:08:31.391 "params": { 00:08:31.391 "name": "uring0" 00:08:31.391 }, 00:08:31.391 "method": "bdev_uring_delete" 00:08:31.391 }, 00:08:31.391 { 00:08:31.391 "method": "bdev_wait_for_examine" 00:08:31.391 } 00:08:31.391 ] 00:08:31.391 } 00:08:31.391 ] 00:08:31.391 } 00:08:31.650 [2024-12-06 11:04:42.657649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.650 [2024-12-06 11:04:42.691984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.909  [2024-12-06T11:04:43.314Z] Copying: 0/0 [B] (average 0 Bps) 00:08:32.167 00:08:32.167 11:04:43 -- dd/uring.sh@94 -- # : 00:08:32.167 11:04:43 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:32.167 11:04:43 -- common/autotest_common.sh@650 -- # local es=0 00:08:32.167 11:04:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:32.167 11:04:43 -- dd/uring.sh@94 -- # gen_conf 00:08:32.167 11:04:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.167 11:04:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.167 11:04:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 11:04:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.167 11:04:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.167 11:04:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.167 11:04:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.167 11:04:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.167 11:04:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.168 11:04:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.168 11:04:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:32.168 [2024-12-06 11:04:43.181936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
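The NOT/valid_exec_arg sequence above is the negative half of the test: once bdev_uring_delete has removed uring0, a further spdk_dd read from it must fail, and the wrapper only passes when the exit status is non-zero. A minimal stand-alone version of that expectation (SPDK_DD and CONF are placeholders as before, and /dev/null replaces the /dev/fd output used by the real script):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=bdev.json   # placeholder; the real config includes the bdev_uring_delete step shown above

  # after uring0 has been deleted, a read from it must fail for the test to pass
  if "$SPDK_DD" --ib=uring0 --of=/dev/null --json "$CONF" 2>/dev/null; then
      echo "ERROR: spdk_dd unexpectedly succeeded on the deleted uring bdev"
      exit 1
  fi
  echo "expected failure observed: uring0 no longer exists"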
00:08:32.168 [2024-12-06 11:04:43.182036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71218 ] 00:08:32.168 { 00:08:32.168 "subsystems": [ 00:08:32.168 { 00:08:32.168 "subsystem": "bdev", 00:08:32.168 "config": [ 00:08:32.168 { 00:08:32.168 "params": { 00:08:32.168 "block_size": 512, 00:08:32.168 "num_blocks": 1048576, 00:08:32.168 "name": "malloc0" 00:08:32.168 }, 00:08:32.168 "method": "bdev_malloc_create" 00:08:32.168 }, 00:08:32.168 { 00:08:32.168 "params": { 00:08:32.168 "filename": "/dev/zram1", 00:08:32.168 "name": "uring0" 00:08:32.168 }, 00:08:32.168 "method": "bdev_uring_create" 00:08:32.168 }, 00:08:32.168 { 00:08:32.168 "params": { 00:08:32.168 "name": "uring0" 00:08:32.168 }, 00:08:32.168 "method": "bdev_uring_delete" 00:08:32.168 }, 00:08:32.168 { 00:08:32.168 "method": "bdev_wait_for_examine" 00:08:32.168 } 00:08:32.168 ] 00:08:32.168 } 00:08:32.168 ] 00:08:32.168 } 00:08:32.427 [2024-12-06 11:04:43.324080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.427 [2024-12-06 11:04:43.366994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.427 [2024-12-06 11:04:43.532157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:32.427 [2024-12-06 11:04:43.532217] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:32.427 [2024-12-06 11:04:43.532242] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:32.427 [2024-12-06 11:04:43.532256] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.684 [2024-12-06 11:04:43.717899] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:32.684 11:04:43 -- common/autotest_common.sh@653 -- # es=237 00:08:32.684 11:04:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.684 11:04:43 -- common/autotest_common.sh@662 -- # es=109 00:08:32.684 11:04:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:32.684 11:04:43 -- common/autotest_common.sh@670 -- # es=1 00:08:32.684 11:04:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.684 11:04:43 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:32.684 11:04:43 -- dd/common.sh@172 -- # local id=1 00:08:32.684 11:04:43 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:32.684 11:04:43 -- dd/common.sh@176 -- # echo 1 00:08:32.684 11:04:43 -- dd/common.sh@177 -- # echo 1 00:08:32.684 11:04:43 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:32.942 00:08:32.942 real 0m15.166s 00:08:32.942 user 0m8.588s 00:08:32.942 sys 0m5.903s 00:08:32.942 11:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.942 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:32.942 ************************************ 00:08:32.942 END TEST dd_uring_copy 00:08:32.942 ************************************ 00:08:32.942 00:08:32.942 real 0m15.403s 00:08:32.942 user 0m8.737s 00:08:32.942 sys 0m5.999s 00:08:32.942 ************************************ 00:08:32.942 END TEST spdk_dd_uring 00:08:32.942 ************************************ 00:08:32.942 11:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.942 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.201 11:04:44 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:33.201 11:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.201 11:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.201 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.201 ************************************ 00:08:33.201 START TEST spdk_dd_sparse 00:08:33.201 ************************************ 00:08:33.202 11:04:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:33.202 * Looking for test storage... 00:08:33.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:33.202 11:04:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:33.202 11:04:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.202 11:04:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:33.202 11:04:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:33.202 11:04:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:33.202 11:04:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:33.202 11:04:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:33.202 11:04:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:33.202 11:04:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:33.202 11:04:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.202 11:04:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:33.202 11:04:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:33.202 11:04:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:33.202 11:04:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:33.202 11:04:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:33.202 11:04:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:33.202 11:04:44 -- scripts/common.sh@344 -- # : 1 00:08:33.202 11:04:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:33.202 11:04:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.202 11:04:44 -- scripts/common.sh@364 -- # decimal 1 00:08:33.202 11:04:44 -- scripts/common.sh@352 -- # local d=1 00:08:33.202 11:04:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.202 11:04:44 -- scripts/common.sh@354 -- # echo 1 00:08:33.202 11:04:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:33.202 11:04:44 -- scripts/common.sh@365 -- # decimal 2 00:08:33.202 11:04:44 -- scripts/common.sh@352 -- # local d=2 00:08:33.202 11:04:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.202 11:04:44 -- scripts/common.sh@354 -- # echo 2 00:08:33.202 11:04:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:33.202 11:04:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:33.202 11:04:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:33.202 11:04:44 -- scripts/common.sh@367 -- # return 0 00:08:33.202 11:04:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.202 11:04:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.202 --rc genhtml_branch_coverage=1 00:08:33.202 --rc genhtml_function_coverage=1 00:08:33.202 --rc genhtml_legend=1 00:08:33.202 --rc geninfo_all_blocks=1 00:08:33.202 --rc geninfo_unexecuted_blocks=1 00:08:33.202 00:08:33.202 ' 00:08:33.202 11:04:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.202 --rc genhtml_branch_coverage=1 00:08:33.202 --rc genhtml_function_coverage=1 00:08:33.202 --rc genhtml_legend=1 00:08:33.202 --rc geninfo_all_blocks=1 00:08:33.202 --rc geninfo_unexecuted_blocks=1 00:08:33.202 00:08:33.202 ' 00:08:33.202 11:04:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.202 --rc genhtml_branch_coverage=1 00:08:33.202 --rc genhtml_function_coverage=1 00:08:33.202 --rc genhtml_legend=1 00:08:33.202 --rc geninfo_all_blocks=1 00:08:33.202 --rc geninfo_unexecuted_blocks=1 00:08:33.202 00:08:33.202 ' 00:08:33.202 11:04:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.202 --rc genhtml_branch_coverage=1 00:08:33.202 --rc genhtml_function_coverage=1 00:08:33.202 --rc genhtml_legend=1 00:08:33.202 --rc geninfo_all_blocks=1 00:08:33.202 --rc geninfo_unexecuted_blocks=1 00:08:33.202 00:08:33.202 ' 00:08:33.202 11:04:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.202 11:04:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.202 11:04:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.202 11:04:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.202 11:04:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.202 11:04:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.202 11:04:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.202 11:04:44 -- paths/export.sh@5 -- # export PATH 00:08:33.202 11:04:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.202 11:04:44 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:33.202 11:04:44 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:33.202 11:04:44 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:33.202 11:04:44 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:33.202 11:04:44 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:33.202 11:04:44 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:33.202 11:04:44 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:33.202 11:04:44 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:33.202 11:04:44 -- dd/sparse.sh@118 -- # prepare 00:08:33.202 11:04:44 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:33.202 11:04:44 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:33.202 1+0 records in 00:08:33.202 1+0 records out 00:08:33.202 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00610541 s, 687 MB/s 00:08:33.202 11:04:44 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:33.202 1+0 records in 00:08:33.202 1+0 records out 00:08:33.202 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00562461 s, 746 MB/s 00:08:33.202 11:04:44 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:33.202 1+0 records in 00:08:33.202 1+0 records out 00:08:33.202 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00522378 s, 803 MB/s 00:08:33.461 11:04:44 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:33.462 11:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.462 11:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.462 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.462 ************************************ 00:08:33.462 START TEST dd_sparse_file_to_file 00:08:33.462 
************************************ 00:08:33.462 11:04:44 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:33.462 11:04:44 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:33.462 11:04:44 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:33.462 11:04:44 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:33.462 11:04:44 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:33.462 11:04:44 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:33.462 11:04:44 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:33.462 11:04:44 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:33.462 11:04:44 -- dd/sparse.sh@41 -- # gen_conf 00:08:33.462 11:04:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.462 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.462 [2024-12-06 11:04:44.406946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:33.462 [2024-12-06 11:04:44.407068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71311 ] 00:08:33.462 { 00:08:33.462 "subsystems": [ 00:08:33.462 { 00:08:33.462 "subsystem": "bdev", 00:08:33.462 "config": [ 00:08:33.462 { 00:08:33.462 "params": { 00:08:33.462 "block_size": 4096, 00:08:33.462 "filename": "dd_sparse_aio_disk", 00:08:33.462 "name": "dd_aio" 00:08:33.462 }, 00:08:33.462 "method": "bdev_aio_create" 00:08:33.462 }, 00:08:33.462 { 00:08:33.462 "params": { 00:08:33.462 "lvs_name": "dd_lvstore", 00:08:33.462 "bdev_name": "dd_aio" 00:08:33.462 }, 00:08:33.462 "method": "bdev_lvol_create_lvstore" 00:08:33.462 }, 00:08:33.462 { 00:08:33.462 "method": "bdev_wait_for_examine" 00:08:33.462 } 00:08:33.462 ] 00:08:33.462 } 00:08:33.462 ] 00:08:33.462 } 00:08:33.462 [2024-12-06 11:04:44.544259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.462 [2024-12-06 11:04:44.580752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.720  [2024-12-06T11:04:44.867Z] Copying: 12/36 [MB] (average 1714 MBps) 00:08:33.720 00:08:33.720 11:04:44 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:33.720 11:04:44 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:33.720 11:04:44 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:33.720 11:04:44 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:33.720 11:04:44 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:33.720 11:04:44 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:33.720 11:04:44 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:33.720 11:04:44 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:33.720 11:04:44 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:33.720 11:04:44 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:33.720 00:08:33.720 real 0m0.489s 00:08:33.720 user 0m0.260s 00:08:33.720 sys 0m0.136s 00:08:33.720 11:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.720 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.720 ************************************ 00:08:33.720 END TEST dd_sparse_file_to_file 00:08:33.720 ************************************ 00:08:33.978 11:04:44 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:33.978 11:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.978 11:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.978 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 ************************************ 00:08:33.978 START TEST dd_sparse_file_to_bdev 00:08:33.978 ************************************ 00:08:33.978 11:04:44 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:33.978 11:04:44 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:33.978 11:04:44 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:33.978 11:04:44 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:33.978 11:04:44 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:33.978 11:04:44 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:33.978 11:04:44 -- dd/sparse.sh@73 -- # gen_conf 00:08:33.978 11:04:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.978 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 [2024-12-06 11:04:44.947795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:33.978 [2024-12-06 11:04:44.947886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:08:33.978 { 00:08:33.978 "subsystems": [ 00:08:33.978 { 00:08:33.978 "subsystem": "bdev", 00:08:33.978 "config": [ 00:08:33.978 { 00:08:33.978 "params": { 00:08:33.978 "block_size": 4096, 00:08:33.978 "filename": "dd_sparse_aio_disk", 00:08:33.978 "name": "dd_aio" 00:08:33.978 }, 00:08:33.978 "method": "bdev_aio_create" 00:08:33.978 }, 00:08:33.978 { 00:08:33.978 "params": { 00:08:33.978 "lvs_name": "dd_lvstore", 00:08:33.978 "lvol_name": "dd_lvol", 00:08:33.978 "size": 37748736, 00:08:33.978 "thin_provision": true 00:08:33.978 }, 00:08:33.978 "method": "bdev_lvol_create" 00:08:33.978 }, 00:08:33.978 { 00:08:33.978 "method": "bdev_wait_for_examine" 00:08:33.978 } 00:08:33.978 ] 00:08:33.978 } 00:08:33.978 ] 00:08:33.978 } 00:08:33.978 [2024-12-06 11:04:45.085200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.978 [2024-12-06 11:04:45.118455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.235 [2024-12-06 11:04:45.177295] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:34.235  [2024-12-06T11:04:45.382Z] Copying: 12/36 [MB] (average 571 MBps)[2024-12-06 11:04:45.214923] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:34.492 00:08:34.492 00:08:34.492 00:08:34.492 real 0m0.500s 00:08:34.492 user 0m0.292s 00:08:34.492 sys 0m0.119s 00:08:34.492 11:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.492 11:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:34.492 ************************************ 00:08:34.492 END TEST dd_sparse_file_to_bdev 00:08:34.492 ************************************ 
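The sparse sub-tests in this group (file_to_file and file_to_bdev above, bdev_to_file just below) all follow the same recipe: build a sparse source with three 4 MiB data islands, push it through spdk_dd with --sparse, then require both the apparent size (stat %s) and the allocated block count (stat %b) to match, which proves the holes were carried across. The condensed sketch below reuses the file names and commands visible in the log; SPDK_DD and CONF are placeholder names for the binary path and the dd_aio/dd_lvstore JSON config supplied via /dev/fd/62 in the real run.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=bdev.json   # placeholder for the dd_aio / dd_lvstore config printed in the log

  # 100 MB backing file for the aio bdev that hosts the lvstore
  truncate dd_sparse_aio_disk --size 104857600

  # sparse source: 4 MiB of data at offsets 0, 16 MiB and 32 MiB, holes in between
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

  # copy that must preserve sparseness (same idea for file->lvol and lvol->file)
  "$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$CONF"

  # identical apparent size AND identical allocated blocks => the holes survived
  [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]] || echo "apparent size differs"
  [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]] || echo "allocated blocks differ"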
00:08:34.492 11:04:45 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:34.492 11:04:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.492 11:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.492 11:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:34.492 ************************************ 00:08:34.492 START TEST dd_sparse_bdev_to_file 00:08:34.492 ************************************ 00:08:34.492 11:04:45 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:34.492 11:04:45 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:34.492 11:04:45 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:34.492 11:04:45 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:34.492 11:04:45 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:34.492 11:04:45 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:34.492 11:04:45 -- dd/sparse.sh@91 -- # gen_conf 00:08:34.492 11:04:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.492 11:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:34.492 [2024-12-06 11:04:45.499080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:34.492 [2024-12-06 11:04:45.499172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:08:34.492 { 00:08:34.492 "subsystems": [ 00:08:34.492 { 00:08:34.492 "subsystem": "bdev", 00:08:34.492 "config": [ 00:08:34.492 { 00:08:34.492 "params": { 00:08:34.492 "block_size": 4096, 00:08:34.492 "filename": "dd_sparse_aio_disk", 00:08:34.492 "name": "dd_aio" 00:08:34.492 }, 00:08:34.492 "method": "bdev_aio_create" 00:08:34.492 }, 00:08:34.492 { 00:08:34.492 "method": "bdev_wait_for_examine" 00:08:34.492 } 00:08:34.492 ] 00:08:34.492 } 00:08:34.492 ] 00:08:34.492 } 00:08:34.492 [2024-12-06 11:04:45.637705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.750 [2024-12-06 11:04:45.669449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.750  [2024-12-06T11:04:46.154Z] Copying: 12/36 [MB] (average 1333 MBps) 00:08:35.007 00:08:35.007 11:04:45 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:35.007 11:04:45 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:35.007 11:04:45 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:35.007 11:04:45 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:35.007 11:04:45 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:35.007 11:04:45 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:35.007 11:04:45 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:35.007 11:04:45 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:35.007 11:04:45 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:35.007 11:04:45 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:35.007 00:08:35.007 real 0m0.483s 00:08:35.007 user 0m0.277s 00:08:35.007 sys 0m0.134s 00:08:35.007 11:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.007 11:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:35.007 ************************************ 00:08:35.007 END TEST dd_sparse_bdev_to_file 00:08:35.007 ************************************ 00:08:35.007 11:04:45 -- 
dd/sparse.sh@1 -- # cleanup 00:08:35.007 11:04:45 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:35.007 11:04:45 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:35.007 11:04:45 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:35.007 11:04:45 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:35.007 00:08:35.007 real 0m1.869s 00:08:35.007 user 0m1.016s 00:08:35.007 sys 0m0.593s 00:08:35.007 11:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.007 11:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:35.007 ************************************ 00:08:35.007 END TEST spdk_dd_sparse 00:08:35.007 ************************************ 00:08:35.007 11:04:46 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:35.007 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.007 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.007 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.007 ************************************ 00:08:35.007 START TEST spdk_dd_negative 00:08:35.007 ************************************ 00:08:35.007 11:04:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:35.007 * Looking for test storage... 00:08:35.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:35.007 11:04:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:35.007 11:04:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:35.007 11:04:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:35.265 11:04:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:35.265 11:04:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:35.265 11:04:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:35.265 11:04:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:35.265 11:04:46 -- scripts/common.sh@335 -- # IFS=.-: 00:08:35.265 11:04:46 -- scripts/common.sh@335 -- # read -ra ver1 00:08:35.265 11:04:46 -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.265 11:04:46 -- scripts/common.sh@336 -- # read -ra ver2 00:08:35.265 11:04:46 -- scripts/common.sh@337 -- # local 'op=<' 00:08:35.265 11:04:46 -- scripts/common.sh@339 -- # ver1_l=2 00:08:35.265 11:04:46 -- scripts/common.sh@340 -- # ver2_l=1 00:08:35.265 11:04:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:35.265 11:04:46 -- scripts/common.sh@343 -- # case "$op" in 00:08:35.265 11:04:46 -- scripts/common.sh@344 -- # : 1 00:08:35.265 11:04:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:35.265 11:04:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.265 11:04:46 -- scripts/common.sh@364 -- # decimal 1 00:08:35.265 11:04:46 -- scripts/common.sh@352 -- # local d=1 00:08:35.265 11:04:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.265 11:04:46 -- scripts/common.sh@354 -- # echo 1 00:08:35.265 11:04:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:35.265 11:04:46 -- scripts/common.sh@365 -- # decimal 2 00:08:35.265 11:04:46 -- scripts/common.sh@352 -- # local d=2 00:08:35.266 11:04:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.266 11:04:46 -- scripts/common.sh@354 -- # echo 2 00:08:35.266 11:04:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:35.266 11:04:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:35.266 11:04:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:35.266 11:04:46 -- scripts/common.sh@367 -- # return 0 00:08:35.266 11:04:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.266 11:04:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:35.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.266 --rc genhtml_branch_coverage=1 00:08:35.266 --rc genhtml_function_coverage=1 00:08:35.266 --rc genhtml_legend=1 00:08:35.266 --rc geninfo_all_blocks=1 00:08:35.266 --rc geninfo_unexecuted_blocks=1 00:08:35.266 00:08:35.266 ' 00:08:35.266 11:04:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:35.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.266 --rc genhtml_branch_coverage=1 00:08:35.266 --rc genhtml_function_coverage=1 00:08:35.266 --rc genhtml_legend=1 00:08:35.266 --rc geninfo_all_blocks=1 00:08:35.266 --rc geninfo_unexecuted_blocks=1 00:08:35.266 00:08:35.266 ' 00:08:35.266 11:04:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:35.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.266 --rc genhtml_branch_coverage=1 00:08:35.266 --rc genhtml_function_coverage=1 00:08:35.266 --rc genhtml_legend=1 00:08:35.266 --rc geninfo_all_blocks=1 00:08:35.266 --rc geninfo_unexecuted_blocks=1 00:08:35.266 00:08:35.266 ' 00:08:35.266 11:04:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:35.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.266 --rc genhtml_branch_coverage=1 00:08:35.266 --rc genhtml_function_coverage=1 00:08:35.266 --rc genhtml_legend=1 00:08:35.266 --rc geninfo_all_blocks=1 00:08:35.266 --rc geninfo_unexecuted_blocks=1 00:08:35.266 00:08:35.266 ' 00:08:35.266 11:04:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.266 11:04:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.266 11:04:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.266 11:04:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.266 11:04:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.266 11:04:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.266 11:04:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.266 11:04:46 -- paths/export.sh@5 -- # export PATH 00:08:35.266 11:04:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.266 11:04:46 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.266 11:04:46 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.266 11:04:46 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.266 11:04:46 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.266 11:04:46 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:35.266 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.266 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.266 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.266 ************************************ 00:08:35.266 START TEST dd_invalid_arguments 00:08:35.266 ************************************ 00:08:35.266 11:04:46 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:35.266 11:04:46 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:35.266 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.266 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:35.266 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.266 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.266 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.266 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.266 11:04:46 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.266 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.266 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.266 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.266 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:35.266 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:35.266 options: 00:08:35.266 -c, --config JSON config file (default none) 00:08:35.266 --json JSON config file (default none) 00:08:35.266 --json-ignore-init-errors 00:08:35.266 don't exit on invalid config entry 00:08:35.266 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:35.266 -g, --single-file-segments 00:08:35.266 force creating just one hugetlbfs file 00:08:35.266 -h, --help show this usage 00:08:35.266 -i, --shm-id shared memory ID (optional) 00:08:35.266 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:35.266 --lcores lcore to CPU mapping list. The list is in the format: 00:08:35.266 [<,lcores[@CPUs]>...] 00:08:35.266 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:35.266 Within the group, '-' is used for range separator, 00:08:35.266 ',' is used for single number separator. 00:08:35.266 '( )' can be omitted for single element group, 00:08:35.266 '@' can be omitted if cpus and lcores have the same value 00:08:35.266 -n, --mem-channels channel number of memory channels used for DPDK 00:08:35.266 -p, --main-core main (primary) core for DPDK 00:08:35.266 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:35.266 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:35.266 --disable-cpumask-locks Disable CPU core lock files. 00:08:35.266 --silence-noticelog disable notice level logging to stderr 00:08:35.266 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:35.266 -u, --no-pci disable PCI access 00:08:35.266 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:35.266 --max-delay maximum reactor delay (in microseconds) 00:08:35.266 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:35.266 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:35.266 -R, --huge-unlink unlink huge files after initialization 00:08:35.266 -v, --version print SPDK version 00:08:35.266 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:35.266 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:35.266 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:35.266 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:35.266 Tracepoints vary in size and can use more than one trace entry. 
00:08:35.266 --rpcs-allowed comma-separated list of permitted RPCS 00:08:35.266 --env-context Opaque context for use of the env implementation 00:08:35.266 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:35.266 --no-huge run without using hugepages 00:08:35.266 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:35.266 -e, --tpoint-group [:] 00:08:35.266 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:35.266 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:35.266 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:35.266 [2024-12-06 11:04:46.297924] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:35.266 can be combined (e.g. thread,bdev:0x1). 00:08:35.266 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:35.267 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:35.267 [--------- DD Options ---------] 00:08:35.267 --if Input file. Must specify either --if or --ib. 00:08:35.267 --ib Input bdev. Must specifier either --if or --ib 00:08:35.267 --of Output file. Must specify either --of or --ob. 00:08:35.267 --ob Output bdev. Must specify either --of or --ob. 00:08:35.267 --iflag Input file flags. 00:08:35.267 --oflag Output file flags. 00:08:35.267 --bs I/O unit size (default: 4096) 00:08:35.267 --qd Queue depth (default: 2) 00:08:35.267 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:35.267 --skip Skip this many I/O units at start of input. (default: 0) 00:08:35.267 --seek Skip this many I/O units at start of output. (default: 0) 00:08:35.267 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:35.267 --sparse Enable hole skipping in input target 00:08:35.267 Available iflag and oflag values: 00:08:35.267 append - append mode 00:08:35.267 direct - use direct I/O for data 00:08:35.267 directory - fail unless a directory 00:08:35.267 dsync - use synchronized I/O for data 00:08:35.267 noatime - do not update access time 00:08:35.267 noctty - do not assign controlling terminal from file 00:08:35.267 nofollow - do not follow symlinks 00:08:35.267 nonblock - use non-blocking I/O 00:08:35.267 sync - use synchronized I/O for data and metadata 00:08:35.267 11:04:46 -- common/autotest_common.sh@653 -- # es=2 00:08:35.267 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.267 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.267 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.267 00:08:35.267 real 0m0.065s 00:08:35.267 user 0m0.036s 00:08:35.267 sys 0m0.028s 00:08:35.267 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.267 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 ************************************ 00:08:35.267 END TEST dd_invalid_arguments 00:08:35.267 ************************************ 00:08:35.267 11:04:46 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:35.267 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.267 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.267 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 ************************************ 00:08:35.267 START TEST dd_double_input 00:08:35.267 ************************************ 00:08:35.267 11:04:46 -- common/autotest_common.sh@1114 -- # double_input 00:08:35.267 11:04:46 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:35.267 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.267 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:35.267 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.267 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.267 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.267 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.267 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.267 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.267 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.267 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.267 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:35.267 [2024-12-06 11:04:46.398627] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:35.530 11:04:46 -- common/autotest_common.sh@653 -- # es=22 00:08:35.530 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.530 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.530 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.530 00:08:35.530 real 0m0.052s 00:08:35.530 user 0m0.030s 00:08:35.530 sys 0m0.022s 00:08:35.530 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.530 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.530 ************************************ 00:08:35.530 END TEST dd_double_input 00:08:35.530 ************************************ 00:08:35.530 11:04:46 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:35.530 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.530 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.530 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.530 ************************************ 00:08:35.530 START TEST dd_double_output 00:08:35.530 ************************************ 00:08:35.530 11:04:46 -- common/autotest_common.sh@1114 -- # double_output 00:08:35.530 11:04:46 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:35.530 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.530 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:35.530 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.531 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:35.531 [2024-12-06 11:04:46.500893] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
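A note on how these negative cases are scored: each one runs spdk_dd under the harness's NOT/valid_exec_arg helpers, captures the exit status (es=22 in the runs shown here), and passes only if that status is non-zero. A rough plain-bash equivalent for the --of/--ob conflict reported just above, assuming only the spdk_dd path and dump files named in this log (NOT itself is a harness helper whose body is not reproduced here), would be:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  # negative_dd.sh pre-creates both dump files (see the touch calls earlier in this log).
  touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  es=0
  # Expect failure: --of and --ob are mutually exclusive, as the error above states.
  "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
             --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= || es=$?
  if (( es == 0 )); then
      echo "negative test failed: spdk_dd accepted --of together with --ob" >&2
      exit 1
  fi
  # The harness additionally special-cases statuses above 128 (signal exits),
  # remapping them before the final non-zero check; that detail is omitted here.
  echo "spdk_dd rejected the arguments with status $es, as expected"

The later negative tests in this block (missing input/output, wrong and oversized block sizes, invalid count, stray oflag/iflag, unknown flags, invalid JSON) follow the same pattern, differing only in the argument set handed to spdk_dd.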
00:08:35.531 11:04:46 -- common/autotest_common.sh@653 -- # es=22 00:08:35.531 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.531 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.531 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.531 00:08:35.531 real 0m0.052s 00:08:35.531 user 0m0.036s 00:08:35.531 sys 0m0.015s 00:08:35.531 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.531 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.531 ************************************ 00:08:35.531 END TEST dd_double_output 00:08:35.531 ************************************ 00:08:35.531 11:04:46 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:35.531 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.531 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.531 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.531 ************************************ 00:08:35.531 START TEST dd_no_input 00:08:35.531 ************************************ 00:08:35.531 11:04:46 -- common/autotest_common.sh@1114 -- # no_input 00:08:35.531 11:04:46 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:35.531 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.531 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:35.531 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.531 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.531 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:35.531 [2024-12-06 11:04:46.620587] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:35.531 11:04:46 -- common/autotest_common.sh@653 -- # es=22 00:08:35.531 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.531 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.531 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.531 00:08:35.532 real 0m0.070s 00:08:35.532 user 0m0.050s 00:08:35.532 sys 0m0.019s 00:08:35.532 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.532 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.532 ************************************ 00:08:35.532 END TEST dd_no_input 00:08:35.532 ************************************ 00:08:35.532 11:04:46 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:35.532 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.532 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.532 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.790 ************************************ 
00:08:35.790 START TEST dd_no_output 00:08:35.790 ************************************ 00:08:35.790 11:04:46 -- common/autotest_common.sh@1114 -- # no_output 00:08:35.790 11:04:46 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.790 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.791 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.791 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.791 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.791 [2024-12-06 11:04:46.732287] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:35.791 11:04:46 -- common/autotest_common.sh@653 -- # es=22 00:08:35.791 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.791 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.791 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.791 00:08:35.791 real 0m0.065s 00:08:35.791 user 0m0.039s 00:08:35.791 sys 0m0.023s 00:08:35.791 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.791 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.791 ************************************ 00:08:35.791 END TEST dd_no_output 00:08:35.791 ************************************ 00:08:35.791 11:04:46 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:35.791 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.791 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.791 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.791 ************************************ 00:08:35.791 START TEST dd_wrong_blocksize 00:08:35.791 ************************************ 00:08:35.791 11:04:46 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:35.791 11:04:46 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:35.791 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.791 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:35.791 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.791 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:35.791 [2024-12-06 11:04:46.855856] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:35.791 11:04:46 -- common/autotest_common.sh@653 -- # es=22 00:08:35.791 11:04:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.791 11:04:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.791 11:04:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.791 00:08:35.791 real 0m0.070s 00:08:35.791 user 0m0.045s 00:08:35.791 sys 0m0.024s 00:08:35.791 11:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.791 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.791 ************************************ 00:08:35.791 END TEST dd_wrong_blocksize 00:08:35.791 ************************************ 00:08:35.791 11:04:46 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:35.791 11:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.791 11:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.791 11:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.791 ************************************ 00:08:35.791 START TEST dd_smaller_blocksize 00:08:35.791 ************************************ 00:08:35.791 11:04:46 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:35.791 11:04:46 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:35.791 11:04:46 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.791 11:04:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:35.791 11:04:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.791 11:04:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:35.791 11:04:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:36.052 [2024-12-06 11:04:46.972262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.052 [2024-12-06 11:04:46.972385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71607 ] 00:08:36.052 [2024-12-06 11:04:47.112343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.052 [2024-12-06 11:04:47.152486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.310 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:36.310 [2024-12-06 11:04:47.203573] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:36.310 [2024-12-06 11:04:47.203610] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.310 [2024-12-06 11:04:47.270168] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:36.310 11:04:47 -- common/autotest_common.sh@653 -- # es=244 00:08:36.310 11:04:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.310 11:04:47 -- common/autotest_common.sh@662 -- # es=116 00:08:36.310 11:04:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.310 11:04:47 -- common/autotest_common.sh@670 -- # es=1 00:08:36.310 11:04:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.310 00:08:36.310 real 0m0.423s 00:08:36.310 user 0m0.219s 00:08:36.310 sys 0m0.099s 00:08:36.310 11:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.310 ************************************ 00:08:36.310 END TEST dd_smaller_blocksize 00:08:36.310 ************************************ 00:08:36.310 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.310 11:04:47 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:36.310 11:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.310 11:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.310 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.310 ************************************ 00:08:36.310 START TEST dd_invalid_count 00:08:36.311 ************************************ 00:08:36.311 11:04:47 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:36.311 11:04:47 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:36.311 11:04:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.311 11:04:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:36.311 11:04:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.311 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.311 11:04:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.311 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.311 11:04:47 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.311 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.311 11:04:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.311 11:04:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.311 11:04:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:36.311 [2024-12-06 11:04:47.446524] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:36.569 11:04:47 -- common/autotest_common.sh@653 -- # es=22 00:08:36.569 11:04:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.569 11:04:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.569 11:04:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.569 00:08:36.569 real 0m0.067s 00:08:36.569 user 0m0.040s 00:08:36.569 sys 0m0.025s 00:08:36.569 11:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.569 ************************************ 00:08:36.569 END TEST dd_invalid_count 00:08:36.569 ************************************ 00:08:36.569 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.569 11:04:47 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:36.569 11:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.569 11:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.569 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.569 ************************************ 00:08:36.569 START TEST dd_invalid_oflag 00:08:36.569 ************************************ 00:08:36.569 11:04:47 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:36.569 11:04:47 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.569 11:04:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.569 11:04:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.569 11:04:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.569 11:04:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:36.569 [2024-12-06 11:04:47.561596] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:36.569 11:04:47 -- common/autotest_common.sh@653 -- # es=22 00:08:36.569 11:04:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.569 11:04:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.569 
11:04:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.569 00:08:36.569 real 0m0.066s 00:08:36.569 user 0m0.037s 00:08:36.569 sys 0m0.028s 00:08:36.569 11:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.569 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.569 ************************************ 00:08:36.569 END TEST dd_invalid_oflag 00:08:36.569 ************************************ 00:08:36.569 11:04:47 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:36.569 11:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.569 11:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.569 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.569 ************************************ 00:08:36.569 START TEST dd_invalid_iflag 00:08:36.569 ************************************ 00:08:36.569 11:04:47 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:36.569 11:04:47 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:36.569 11:04:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.569 11:04:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.570 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.570 11:04:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.570 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.570 11:04:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.570 11:04:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.570 11:04:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:36.570 [2024-12-06 11:04:47.679785] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:36.570 11:04:47 -- common/autotest_common.sh@653 -- # es=22 00:08:36.570 11:04:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.570 11:04:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.570 11:04:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.570 00:08:36.570 real 0m0.069s 00:08:36.570 user 0m0.042s 00:08:36.570 sys 0m0.025s 00:08:36.570 11:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.570 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.570 ************************************ 00:08:36.570 END TEST dd_invalid_iflag 00:08:36.570 ************************************ 00:08:36.828 11:04:47 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:36.828 11:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.828 11:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.828 11:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.828 ************************************ 00:08:36.828 START TEST dd_unknown_flag 00:08:36.828 ************************************ 00:08:36.828 11:04:47 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:36.828 11:04:47 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:36.828 11:04:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.828 11:04:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:36.828 11:04:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.828 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.828 11:04:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.828 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.828 11:04:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.828 11:04:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.828 11:04:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.828 11:04:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.828 11:04:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:36.828 [2024-12-06 11:04:47.801515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.828 [2024-12-06 11:04:47.801624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71699 ] 00:08:36.828 [2024-12-06 11:04:47.941131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.087 [2024-12-06 11:04:47.983681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.087 [2024-12-06 11:04:48.034281] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:37.087 [2024-12-06 11:04:48.034358] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:37.087 [2024-12-06 11:04:48.034373] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:37.087 [2024-12-06 11:04:48.034388] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.087 [2024-12-06 11:04:48.100318] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:37.087 11:04:48 -- common/autotest_common.sh@653 -- # es=236 00:08:37.087 11:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.087 11:04:48 -- common/autotest_common.sh@662 -- # es=108 00:08:37.087 11:04:48 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:37.087 11:04:48 -- common/autotest_common.sh@670 -- # es=1 00:08:37.087 11:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.087 00:08:37.087 real 0m0.424s 00:08:37.087 user 0m0.215s 00:08:37.087 sys 0m0.103s 00:08:37.087 11:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.087 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.087 ************************************ 00:08:37.087 END 
TEST dd_unknown_flag 00:08:37.087 ************************************ 00:08:37.087 11:04:48 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:37.087 11:04:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.087 11:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.087 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.087 ************************************ 00:08:37.087 START TEST dd_invalid_json 00:08:37.087 ************************************ 00:08:37.087 11:04:48 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:37.087 11:04:48 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:37.087 11:04:48 -- common/autotest_common.sh@650 -- # local es=0 00:08:37.087 11:04:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:37.087 11:04:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.087 11:04:48 -- dd/negative_dd.sh@95 -- # : 00:08:37.346 11:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.346 11:04:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.346 11:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.346 11:04:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.346 11:04:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.346 11:04:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.346 11:04:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:37.346 11:04:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:37.346 [2024-12-06 11:04:48.282660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:37.346 [2024-12-06 11:04:48.282758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:08:37.346 [2024-12-06 11:04:48.422370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.346 [2024-12-06 11:04:48.463533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.346 [2024-12-06 11:04:48.463720] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:37.346 [2024-12-06 11:04:48.463745] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.346 [2024-12-06 11:04:48.463793] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:37.606 11:04:48 -- common/autotest_common.sh@653 -- # es=234 00:08:37.606 11:04:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.606 11:04:48 -- common/autotest_common.sh@662 -- # es=106 00:08:37.606 11:04:48 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:37.606 11:04:48 -- common/autotest_common.sh@670 -- # es=1 00:08:37.606 11:04:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.606 00:08:37.606 real 0m0.304s 00:08:37.606 user 0m0.139s 00:08:37.606 sys 0m0.063s 00:08:37.606 11:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.606 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 ************************************ 00:08:37.606 END TEST dd_invalid_json 00:08:37.606 ************************************ 00:08:37.606 00:08:37.606 real 0m2.532s 00:08:37.606 user 0m1.244s 00:08:37.606 sys 0m0.920s 00:08:37.606 11:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.606 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 ************************************ 00:08:37.606 END TEST spdk_dd_negative 00:08:37.606 ************************************ 00:08:37.606 00:08:37.606 real 1m3.303s 00:08:37.606 user 0m38.026s 00:08:37.606 sys 0m16.081s 00:08:37.606 11:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.606 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 ************************************ 00:08:37.606 END TEST spdk_dd 00:08:37.606 ************************************ 00:08:37.606 11:04:48 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:37.606 11:04:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.606 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 11:04:48 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:37.606 11:04:48 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:37.606 11:04:48 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.606 11:04:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.606 11:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.606 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.606 ************************************ 00:08:37.606 START TEST 
nvmf_tcp 00:08:37.606 ************************************ 00:08:37.606 11:04:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.865 * Looking for test storage... 00:08:37.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:37.865 11:04:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.865 11:04:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.865 11:04:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.865 11:04:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.865 11:04:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.865 11:04:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.865 11:04:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.865 11:04:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.865 11:04:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.865 11:04:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.865 11:04:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.865 11:04:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.865 11:04:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.865 11:04:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.865 11:04:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.865 11:04:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.866 11:04:48 -- scripts/common.sh@344 -- # : 1 00:08:37.866 11:04:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.866 11:04:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.866 11:04:48 -- scripts/common.sh@364 -- # decimal 1 00:08:37.866 11:04:48 -- scripts/common.sh@352 -- # local d=1 00:08:37.866 11:04:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.866 11:04:48 -- scripts/common.sh@354 -- # echo 1 00:08:37.866 11:04:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.866 11:04:48 -- scripts/common.sh@365 -- # decimal 2 00:08:37.866 11:04:48 -- scripts/common.sh@352 -- # local d=2 00:08:37.866 11:04:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.866 11:04:48 -- scripts/common.sh@354 -- # echo 2 00:08:37.866 11:04:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.866 11:04:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.866 11:04:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.866 11:04:48 -- scripts/common.sh@367 -- # return 0 00:08:37.866 11:04:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.866 11:04:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.866 --rc genhtml_branch_coverage=1 00:08:37.866 --rc genhtml_function_coverage=1 00:08:37.866 --rc genhtml_legend=1 00:08:37.866 --rc geninfo_all_blocks=1 00:08:37.866 --rc geninfo_unexecuted_blocks=1 00:08:37.866 00:08:37.866 ' 00:08:37.866 11:04:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.866 --rc genhtml_branch_coverage=1 00:08:37.866 --rc genhtml_function_coverage=1 00:08:37.866 --rc genhtml_legend=1 00:08:37.866 --rc geninfo_all_blocks=1 00:08:37.866 --rc geninfo_unexecuted_blocks=1 00:08:37.866 00:08:37.866 ' 00:08:37.866 11:04:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.866 --rc 
genhtml_branch_coverage=1 00:08:37.866 --rc genhtml_function_coverage=1 00:08:37.866 --rc genhtml_legend=1 00:08:37.866 --rc geninfo_all_blocks=1 00:08:37.866 --rc geninfo_unexecuted_blocks=1 00:08:37.866 00:08:37.866 ' 00:08:37.866 11:04:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.866 --rc genhtml_branch_coverage=1 00:08:37.866 --rc genhtml_function_coverage=1 00:08:37.866 --rc genhtml_legend=1 00:08:37.866 --rc geninfo_all_blocks=1 00:08:37.866 --rc geninfo_unexecuted_blocks=1 00:08:37.866 00:08:37.866 ' 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.866 11:04:48 -- nvmf/common.sh@7 -- # uname -s 00:08:37.866 11:04:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.866 11:04:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.866 11:04:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.866 11:04:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.866 11:04:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.866 11:04:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.866 11:04:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.866 11:04:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.866 11:04:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.866 11:04:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.866 11:04:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:37.866 11:04:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:37.866 11:04:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.866 11:04:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.866 11:04:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.866 11:04:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.866 11:04:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.866 11:04:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.866 11:04:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.866 11:04:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.866 11:04:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.866 11:04:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.866 11:04:48 -- paths/export.sh@5 -- # export PATH 00:08:37.866 11:04:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.866 11:04:48 -- nvmf/common.sh@46 -- # : 0 00:08:37.866 11:04:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.866 11:04:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.866 11:04:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.866 11:04:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.866 11:04:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.866 11:04:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.866 11:04:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.866 11:04:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:37.866 11:04:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.866 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:37.866 11:04:48 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.866 11:04:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.866 11:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.866 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.866 ************************************ 00:08:37.866 START TEST nvmf_host_management 00:08:37.866 ************************************ 00:08:37.866 11:04:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.126 * Looking for test storage... 
00:08:38.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.126 11:04:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.126 11:04:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.126 11:04:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.126 11:04:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.126 11:04:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.126 11:04:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.126 11:04:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.126 11:04:49 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.126 11:04:49 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.126 11:04:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.126 11:04:49 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.126 11:04:49 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.126 11:04:49 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.126 11:04:49 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.126 11:04:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.126 11:04:49 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.126 11:04:49 -- scripts/common.sh@344 -- # : 1 00:08:38.126 11:04:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.126 11:04:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.126 11:04:49 -- scripts/common.sh@364 -- # decimal 1 00:08:38.126 11:04:49 -- scripts/common.sh@352 -- # local d=1 00:08:38.126 11:04:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.126 11:04:49 -- scripts/common.sh@354 -- # echo 1 00:08:38.126 11:04:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.126 11:04:49 -- scripts/common.sh@365 -- # decimal 2 00:08:38.126 11:04:49 -- scripts/common.sh@352 -- # local d=2 00:08:38.126 11:04:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.126 11:04:49 -- scripts/common.sh@354 -- # echo 2 00:08:38.126 11:04:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.126 11:04:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.126 11:04:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.126 11:04:49 -- scripts/common.sh@367 -- # return 0 00:08:38.126 11:04:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.126 11:04:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.126 --rc genhtml_branch_coverage=1 00:08:38.126 --rc genhtml_function_coverage=1 00:08:38.126 --rc genhtml_legend=1 00:08:38.126 --rc geninfo_all_blocks=1 00:08:38.126 --rc geninfo_unexecuted_blocks=1 00:08:38.126 00:08:38.126 ' 00:08:38.126 11:04:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.126 --rc genhtml_branch_coverage=1 00:08:38.126 --rc genhtml_function_coverage=1 00:08:38.126 --rc genhtml_legend=1 00:08:38.126 --rc geninfo_all_blocks=1 00:08:38.127 --rc geninfo_unexecuted_blocks=1 00:08:38.127 00:08:38.127 ' 00:08:38.127 11:04:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.127 --rc genhtml_branch_coverage=1 00:08:38.127 --rc genhtml_function_coverage=1 00:08:38.127 --rc genhtml_legend=1 00:08:38.127 --rc geninfo_all_blocks=1 00:08:38.127 --rc geninfo_unexecuted_blocks=1 00:08:38.127 00:08:38.127 ' 00:08:38.127 
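The lt 1.15 2 / cmp_versions trace above (and repeated here for the host_management run) is scripts/common.sh splitting two dotted version strings on '.' and walking them field by field as integers before the lcov coverage flags are chosen. A minimal stand-alone sketch of that kind of comparison, assuming purely numeric dot-separated fields; it is written for illustration and is not the SPDK helper itself:

#!/usr/bin/env bash
# Illustrative version comparison: succeeds (returns 0) when $1 sorts before $2,
# mirroring the field-by-field numeric walk visible in the cmp_versions trace.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # same comparison as the logged 'lt 1.15 2'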
11:04:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.127 --rc genhtml_branch_coverage=1 00:08:38.127 --rc genhtml_function_coverage=1 00:08:38.127 --rc genhtml_legend=1 00:08:38.127 --rc geninfo_all_blocks=1 00:08:38.127 --rc geninfo_unexecuted_blocks=1 00:08:38.127 00:08:38.127 ' 00:08:38.127 11:04:49 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.127 11:04:49 -- nvmf/common.sh@7 -- # uname -s 00:08:38.127 11:04:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.127 11:04:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.127 11:04:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.127 11:04:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.127 11:04:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.127 11:04:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.127 11:04:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.127 11:04:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.127 11:04:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.127 11:04:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:38.127 11:04:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:38.127 11:04:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.127 11:04:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.127 11:04:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.127 11:04:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.127 11:04:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.127 11:04:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.127 11:04:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.127 11:04:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.127 11:04:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.127 11:04:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.127 11:04:49 -- paths/export.sh@5 -- # export PATH 00:08:38.127 11:04:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.127 11:04:49 -- nvmf/common.sh@46 -- # : 0 00:08:38.127 11:04:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.127 11:04:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.127 11:04:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.127 11:04:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.127 11:04:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.127 11:04:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.127 11:04:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.127 11:04:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.127 11:04:49 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.127 11:04:49 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.127 11:04:49 -- target/host_management.sh@104 -- # nvmftestinit 00:08:38.127 11:04:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.127 11:04:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.127 11:04:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.127 11:04:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.127 11:04:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.127 11:04:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.127 11:04:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.127 11:04:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.127 11:04:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:38.127 11:04:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:38.127 11:04:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.127 11:04:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.127 11:04:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.127 11:04:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:38.127 11:04:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.127 11:04:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.127 11:04:49 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.127 11:04:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.127 11:04:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.127 11:04:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.127 11:04:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.127 11:04:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.127 11:04:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:38.127 Cannot find device "nvmf_init_br" 00:08:38.127 11:04:49 -- nvmf/common.sh@153 -- # true 00:08:38.127 11:04:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:38.127 Cannot find device "nvmf_tgt_br" 00:08:38.127 11:04:49 -- nvmf/common.sh@154 -- # true 00:08:38.127 11:04:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.127 Cannot find device "nvmf_tgt_br2" 00:08:38.127 11:04:49 -- nvmf/common.sh@155 -- # true 00:08:38.127 11:04:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:38.127 Cannot find device "nvmf_init_br" 00:08:38.127 11:04:49 -- nvmf/common.sh@156 -- # true 00:08:38.127 11:04:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:38.387 Cannot find device "nvmf_tgt_br" 00:08:38.387 11:04:49 -- nvmf/common.sh@157 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:38.387 Cannot find device "nvmf_tgt_br2" 00:08:38.387 11:04:49 -- nvmf/common.sh@158 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:38.387 Cannot find device "nvmf_br" 00:08:38.387 11:04:49 -- nvmf/common.sh@159 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:38.387 Cannot find device "nvmf_init_if" 00:08:38.387 11:04:49 -- nvmf/common.sh@160 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.387 11:04:49 -- nvmf/common.sh@161 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.387 11:04:49 -- nvmf/common.sh@162 -- # true 00:08:38.387 11:04:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.387 11:04:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.387 11:04:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.387 11:04:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.387 11:04:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.387 11:04:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.387 11:04:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.387 11:04:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.387 11:04:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.387 11:04:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:38.387 11:04:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:38.387 11:04:49 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:38.387 11:04:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:38.387 11:04:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.387 11:04:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.387 11:04:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.387 11:04:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:38.387 11:04:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:38.387 11:04:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.387 11:04:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.387 11:04:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.670 11:04:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.670 11:04:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.670 11:04:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:38.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:38.670 00:08:38.670 --- 10.0.0.2 ping statistics --- 00:08:38.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.670 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:38.670 11:04:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:38.670 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.670 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:38.670 00:08:38.670 --- 10.0.0.3 ping statistics --- 00:08:38.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.670 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:38.670 11:04:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:38.670 00:08:38.670 --- 10.0.0.1 ping statistics --- 00:08:38.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.670 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:38.670 11:04:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.670 11:04:49 -- nvmf/common.sh@421 -- # return 0 00:08:38.670 11:04:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.670 11:04:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.670 11:04:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.670 11:04:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.670 11:04:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.670 11:04:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.670 11:04:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.670 11:04:49 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:38.670 11:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.670 11:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.670 11:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.670 ************************************ 00:08:38.670 START TEST nvmf_host_management 00:08:38.670 ************************************ 00:08:38.670 11:04:49 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:38.670 11:04:49 -- target/host_management.sh@69 -- # starttarget 00:08:38.670 11:04:49 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:38.670 11:04:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.670 11:04:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.670 11:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.670 11:04:49 -- nvmf/common.sh@469 -- # nvmfpid=72007 00:08:38.670 11:04:49 -- nvmf/common.sh@470 -- # waitforlisten 72007 00:08:38.670 11:04:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:38.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.670 11:04:49 -- common/autotest_common.sh@829 -- # '[' -z 72007 ']' 00:08:38.670 11:04:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.670 11:04:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.670 11:04:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.670 11:04:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.670 11:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.670 [2024-12-06 11:04:49.697042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.670 [2024-12-06 11:04:49.697141] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.939 [2024-12-06 11:04:49.842778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.939 [2024-12-06 11:04:49.886699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.939 [2024-12-06 11:04:49.886904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
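The nvmf_veth_init steps traced above build the whole test fabric in software: a nvmf_tgt_ns_spdk network namespace holding the target end of a veth pair, an initiator veth pair left on the host, a nvmf_br bridge joining the host-side peers, 10.0.0.1/10.0.0.2 addressing, an iptables rule admitting TCP port 4420, and ping checks in both directions. The following is a condensed, hand-written sketch of that topology; interface names, addresses, and rules are taken from the log, the second target interface (nvmf_tgt_if2, 10.0.0.3) is omitted, and this is not the nvmf/common.sh code itself:

#!/usr/bin/env bash
# Rebuild the NVMe/TCP test network from the log by hand (requires root).
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator (stays on the host), one for the target
# (its far end is moved into the namespace).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator side, 10.0.0.2 = target listener address.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the initiator can reach the target namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on port 4420 and let it cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same reachability checks as the log.
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1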
00:08:38.939 [2024-12-06 11:04:49.886931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.939 [2024-12-06 11:04:49.886949] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.939 [2024-12-06 11:04:49.887128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.939 [2024-12-06 11:04:49.887288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.939 [2024-12-06 11:04:49.887353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.939 [2024-12-06 11:04:49.887364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.877 11:04:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.877 11:04:50 -- common/autotest_common.sh@862 -- # return 0 00:08:39.877 11:04:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.877 11:04:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 11:04:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.877 11:04:50 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.877 11:04:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 [2024-12-06 11:04:50.799726] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.877 11:04:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.877 11:04:50 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.877 11:04:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 11:04:50 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:39.877 11:04:50 -- target/host_management.sh@23 -- # cat 00:08:39.877 11:04:50 -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.877 11:04:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 Malloc0 00:08:39.877 [2024-12-06 11:04:50.876384] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.877 11:04:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.877 11:04:50 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.877 11:04:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.877 11:04:50 -- target/host_management.sh@73 -- # perfpid=72066 00:08:39.877 11:04:50 -- target/host_management.sh@74 -- # waitforlisten 72066 /var/tmp/bdevperf.sock 00:08:39.877 11:04:50 -- common/autotest_common.sh@829 -- # '[' -z 72066 ']' 00:08:39.877 11:04:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.877 11:04:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.877 11:04:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
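The target's storage and subsystem plumbing is applied as a batch: host_management.sh writes its RPCs into rpcs.txt (the cat at @23) and feeds them to rpc_cmd at @30, so the individual calls are not echoed in the trace. Reconstructed from the values that do appear (64 MB / 512-byte Malloc bdev, serial SPDKISFASTANDAWESOME, subsystem cnode0 with host0, TCP listener on 10.0.0.2:4420), the batch corresponds roughly to the rpc.py calls below; the exact flags and ordering are an assumption, not a transcript of the test script:

#!/usr/bin/env bash
# Assumed reconstruction of the batched target configuration (run from the SPDK
# repo root, against the nvmf_tgt already listening on /var/tmp/spdk.sock).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # verbatim in the trace
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512              # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420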
00:08:39.877 11:04:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.877 11:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 11:04:50 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.877 11:04:50 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.877 11:04:50 -- nvmf/common.sh@520 -- # config=() 00:08:39.877 11:04:50 -- nvmf/common.sh@520 -- # local subsystem config 00:08:39.877 11:04:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:39.877 11:04:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:39.877 { 00:08:39.877 "params": { 00:08:39.877 "name": "Nvme$subsystem", 00:08:39.877 "trtype": "$TEST_TRANSPORT", 00:08:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.877 "adrfam": "ipv4", 00:08:39.877 "trsvcid": "$NVMF_PORT", 00:08:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.877 "hdgst": ${hdgst:-false}, 00:08:39.877 "ddgst": ${ddgst:-false} 00:08:39.877 }, 00:08:39.877 "method": "bdev_nvme_attach_controller" 00:08:39.877 } 00:08:39.877 EOF 00:08:39.877 )") 00:08:39.877 11:04:50 -- nvmf/common.sh@542 -- # cat 00:08:39.877 11:04:50 -- nvmf/common.sh@544 -- # jq . 00:08:39.877 11:04:50 -- nvmf/common.sh@545 -- # IFS=, 00:08:39.877 11:04:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:39.877 "params": { 00:08:39.877 "name": "Nvme0", 00:08:39.877 "trtype": "tcp", 00:08:39.877 "traddr": "10.0.0.2", 00:08:39.877 "adrfam": "ipv4", 00:08:39.877 "trsvcid": "4420", 00:08:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.877 "hdgst": false, 00:08:39.877 "ddgst": false 00:08:39.877 }, 00:08:39.877 "method": "bdev_nvme_attach_controller" 00:08:39.877 }' 00:08:39.877 [2024-12-06 11:04:50.975398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:39.877 [2024-12-06 11:04:50.975483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72066 ] 00:08:40.136 [2024-12-06 11:04:51.118410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.136 [2024-12-06 11:04:51.157258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.395 Running I/O for 10 seconds... 
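Everything bdevperf needs is handed over in that --json /dev/fd/63 payload: gen_nvmf_target_json expands the parameter template printed above into a single bdev_nvme_attach_controller entry aimed at the listener created earlier. Written out as a file, the equivalent config and invocation look roughly like this; the attach-controller params are exactly as printed in the trace, while the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config shape and is assumed here, since the log only prints the inner object:

#!/usr/bin/env bash
# Run the same bdevperf workload against the target from a file instead of /dev/fd/63.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same knobs as the logged run: queue depth 64, 64 KiB I/O, verify workload, 10 seconds,
# with a private RPC socket so the harness can poll bdev_get_iostat while it runs.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10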
00:08:40.965 11:04:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.965 11:04:51 -- common/autotest_common.sh@862 -- # return 0 00:08:40.965 11:04:51 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:40.965 11:04:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.965 11:04:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.965 11:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.965 11:04:52 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.965 11:04:52 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.965 11:04:52 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.965 11:04:52 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.965 11:04:52 -- target/host_management.sh@52 -- # local ret=1 00:08:40.965 11:04:52 -- target/host_management.sh@53 -- # local i 00:08:40.965 11:04:52 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.965 11:04:52 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.965 11:04:52 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.965 11:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.965 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.965 11:04:52 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.965 11:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.965 11:04:52 -- target/host_management.sh@55 -- # read_io_count=2042 00:08:40.965 11:04:52 -- target/host_management.sh@58 -- # '[' 2042 -ge 100 ']' 00:08:40.965 11:04:52 -- target/host_management.sh@59 -- # ret=0 00:08:40.965 11:04:52 -- target/host_management.sh@60 -- # break 00:08:40.965 11:04:52 -- target/host_management.sh@64 -- # return 0 00:08:40.965 11:04:52 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.965 11:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.965 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.965 [2024-12-06 11:04:52.065565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to 
be set 00:08:40.965 [2024-12-06 11:04:52.065686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.965 [2024-12-06 11:04:52.065870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065895] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.065997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066021] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066084] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3330 is same with the state(5) to be set 00:08:40.966 [2024-12-06 11:04:52.066157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.966 [2024-12-06 11:04:52.066834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.966 [2024-12-06 11:04:52.066845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.066866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.066902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.066950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.066969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.066989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.066997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.967 [2024-12-06 11:04:52.067654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.967 [2024-12-06 11:04:52.067666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cf460 is same with the state(5) to be set 00:08:40.967 [2024-12-06 11:04:52.067710] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10cf460 was disconnected and freed. reset controller. 
00:08:40.967 [2024-12-06 11:04:52.068879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:40.967 task offset: 18176 on job bdev=Nvme0n1 fails 00:08:40.967 00:08:40.967 Latency(us) 00:08:40.967 [2024-12-06T11:04:52.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.967 [2024-12-06T11:04:52.114Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.967 [2024-12-06T11:04:52.114Z] Job: Nvme0n1 ended in about 0.78 seconds with error 00:08:40.967 Verification LBA range: start 0x0 length 0x400 00:08:40.967 Nvme0n1 : 0.78 2797.90 174.87 82.44 0.00 21860.52 8102.63 31695.59 00:08:40.967 [2024-12-06T11:04:52.114Z] =================================================================================================================== 00:08:40.967 [2024-12-06T11:04:52.114Z] Total : 2797.90 174.87 82.44 0.00 21860.52 8102.63 31695.59 00:08:40.967 [2024-12-06 11:04:52.070992] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.968 [2024-12-06 11:04:52.071015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d0da0 (9): Bad file descriptor 00:08:40.968 11:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.968 11:04:52 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.968 11:04:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.968 11:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.968 [2024-12-06 11:04:52.076896] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:40.968 [2024-12-06 11:04:52.077006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:40.968 [2024-12-06 11:04:52.077030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.968 [2024-12-06 11:04:52.077044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:40.968 [2024-12-06 11:04:52.077054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:40.968 [2024-12-06 11:04:52.077062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:40.968 [2024-12-06 11:04:52.077070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10d0da0 00:08:40.968 [2024-12-06 11:04:52.077100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d0da0 (9): Bad file descriptor 00:08:40.968 [2024-12-06 11:04:52.077118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:40.968 [2024-12-06 11:04:52.077126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:40.968 [2024-12-06 11:04:52.077136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:40.968 [2024-12-06 11:04:52.077152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
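The CONNECT errors above (sct 1, sc 132) show the subsystem rejecting this host NQN; the trace then whitelists it with nvmf_subsystem_add_host through the rpc_cmd wrapper. A rough sketch of that same step issued directly with rpc.py (the /var/tmp/spdk.sock target socket path is an assumption, not shown in this trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # alternatively (not what this test does) a subsystem can be created with -a /
  # --allow-any-host so no per-host whitelisting is needed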
00:08:40.968 11:04:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.968 11:04:52 -- target/host_management.sh@87 -- # sleep 1 00:08:42.345 11:04:53 -- target/host_management.sh@91 -- # kill -9 72066 00:08:42.345 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72066) - No such process 00:08:42.345 11:04:53 -- target/host_management.sh@91 -- # true 00:08:42.345 11:04:53 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:42.345 11:04:53 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:42.345 11:04:53 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:42.345 11:04:53 -- nvmf/common.sh@520 -- # config=() 00:08:42.345 11:04:53 -- nvmf/common.sh@520 -- # local subsystem config 00:08:42.345 11:04:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:42.345 11:04:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:42.345 { 00:08:42.345 "params": { 00:08:42.345 "name": "Nvme$subsystem", 00:08:42.345 "trtype": "$TEST_TRANSPORT", 00:08:42.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.345 "adrfam": "ipv4", 00:08:42.345 "trsvcid": "$NVMF_PORT", 00:08:42.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.345 "hdgst": ${hdgst:-false}, 00:08:42.345 "ddgst": ${ddgst:-false} 00:08:42.345 }, 00:08:42.345 "method": "bdev_nvme_attach_controller" 00:08:42.345 } 00:08:42.345 EOF 00:08:42.345 )") 00:08:42.345 11:04:53 -- nvmf/common.sh@542 -- # cat 00:08:42.345 11:04:53 -- nvmf/common.sh@544 -- # jq . 00:08:42.345 11:04:53 -- nvmf/common.sh@545 -- # IFS=, 00:08:42.345 11:04:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:42.345 "params": { 00:08:42.345 "name": "Nvme0", 00:08:42.346 "trtype": "tcp", 00:08:42.346 "traddr": "10.0.0.2", 00:08:42.346 "adrfam": "ipv4", 00:08:42.346 "trsvcid": "4420", 00:08:42.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:42.346 "hdgst": false, 00:08:42.346 "ddgst": false 00:08:42.346 }, 00:08:42.346 "method": "bdev_nvme_attach_controller" 00:08:42.346 }' 00:08:42.346 [2024-12-06 11:04:53.140826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.346 [2024-12-06 11:04:53.140928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72104 ] 00:08:42.346 [2024-12-06 11:04:53.285844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.346 [2024-12-06 11:04:53.320757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.346 Running I/O for 1 seconds... 
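The JSON fragment above is the parameter object for a bdev_nvme_attach_controller call that bdevperf replays from its --json input, handed over a file descriptor by gen_nvmf_target_json. As a sketch only, the same attach could be issued by hand against the bdevperf RPC socket; the short option spellings below are an assumption taken from rpc.py's usual flags rather than from this trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0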
00:08:43.723 00:08:43.723 Latency(us) 00:08:43.723 [2024-12-06T11:04:54.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.723 [2024-12-06T11:04:54.870Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.723 Verification LBA range: start 0x0 length 0x400 00:08:43.723 Nvme0n1 : 1.01 2977.19 186.07 0.00 0.00 21173.14 945.80 27048.49 00:08:43.723 [2024-12-06T11:04:54.870Z] =================================================================================================================== 00:08:43.723 [2024-12-06T11:04:54.870Z] Total : 2977.19 186.07 0.00 0.00 21173.14 945.80 27048.49 00:08:43.723 11:04:54 -- target/host_management.sh@101 -- # stoptarget 00:08:43.723 11:04:54 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:43.723 11:04:54 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:43.723 11:04:54 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:43.723 11:04:54 -- target/host_management.sh@40 -- # nvmftestfini 00:08:43.723 11:04:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:43.723 11:04:54 -- nvmf/common.sh@116 -- # sync 00:08:43.723 11:04:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:43.723 11:04:54 -- nvmf/common.sh@119 -- # set +e 00:08:43.723 11:04:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:43.723 11:04:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:43.723 rmmod nvme_tcp 00:08:43.723 rmmod nvme_fabrics 00:08:43.723 rmmod nvme_keyring 00:08:43.723 11:04:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:43.723 11:04:54 -- nvmf/common.sh@123 -- # set -e 00:08:43.723 11:04:54 -- nvmf/common.sh@124 -- # return 0 00:08:43.723 11:04:54 -- nvmf/common.sh@477 -- # '[' -n 72007 ']' 00:08:43.723 11:04:54 -- nvmf/common.sh@478 -- # killprocess 72007 00:08:43.723 11:04:54 -- common/autotest_common.sh@936 -- # '[' -z 72007 ']' 00:08:43.723 11:04:54 -- common/autotest_common.sh@940 -- # kill -0 72007 00:08:43.723 11:04:54 -- common/autotest_common.sh@941 -- # uname 00:08:43.723 11:04:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:43.723 11:04:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72007 00:08:43.723 killing process with pid 72007 00:08:43.723 11:04:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:43.723 11:04:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:43.723 11:04:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72007' 00:08:43.723 11:04:54 -- common/autotest_common.sh@955 -- # kill 72007 00:08:43.723 11:04:54 -- common/autotest_common.sh@960 -- # wait 72007 00:08:43.982 [2024-12-06 11:04:54.896803] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:43.982 11:04:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:43.982 11:04:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:43.982 11:04:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:43.982 11:04:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.982 11:04:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:43.982 11:04:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.982 11:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.982 11:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.982 11:04:54 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:43.982 00:08:43.982 real 0m5.324s 00:08:43.982 user 0m22.637s 00:08:43.982 sys 0m1.183s 00:08:43.982 11:04:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.982 11:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:43.982 ************************************ 00:08:43.982 END TEST nvmf_host_management 00:08:43.982 ************************************ 00:08:43.982 11:04:55 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:43.982 00:08:43.982 real 0m6.055s 00:08:43.982 user 0m22.891s 00:08:43.982 sys 0m1.440s 00:08:43.982 11:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.982 11:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:43.982 ************************************ 00:08:43.982 END TEST nvmf_host_management 00:08:43.982 ************************************ 00:08:43.982 11:04:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.982 11:04:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.982 11:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.982 11:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:43.982 ************************************ 00:08:43.982 START TEST nvmf_lvol 00:08:43.982 ************************************ 00:08:43.982 11:04:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:44.241 * Looking for test storage... 00:08:44.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.241 11:04:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:44.241 11:04:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:44.241 11:04:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:44.241 11:04:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:44.241 11:04:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:44.241 11:04:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:44.241 11:04:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:44.241 11:04:55 -- scripts/common.sh@335 -- # IFS=.-: 00:08:44.241 11:04:55 -- scripts/common.sh@335 -- # read -ra ver1 00:08:44.241 11:04:55 -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.241 11:04:55 -- scripts/common.sh@336 -- # read -ra ver2 00:08:44.241 11:04:55 -- scripts/common.sh@337 -- # local 'op=<' 00:08:44.241 11:04:55 -- scripts/common.sh@339 -- # ver1_l=2 00:08:44.241 11:04:55 -- scripts/common.sh@340 -- # ver2_l=1 00:08:44.241 11:04:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:44.241 11:04:55 -- scripts/common.sh@343 -- # case "$op" in 00:08:44.241 11:04:55 -- scripts/common.sh@344 -- # : 1 00:08:44.241 11:04:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:44.241 11:04:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.241 11:04:55 -- scripts/common.sh@364 -- # decimal 1 00:08:44.241 11:04:55 -- scripts/common.sh@352 -- # local d=1 00:08:44.241 11:04:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.241 11:04:55 -- scripts/common.sh@354 -- # echo 1 00:08:44.241 11:04:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:44.241 11:04:55 -- scripts/common.sh@365 -- # decimal 2 00:08:44.241 11:04:55 -- scripts/common.sh@352 -- # local d=2 00:08:44.241 11:04:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.241 11:04:55 -- scripts/common.sh@354 -- # echo 2 00:08:44.241 11:04:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:44.241 11:04:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:44.241 11:04:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:44.241 11:04:55 -- scripts/common.sh@367 -- # return 0 00:08:44.241 11:04:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.241 11:04:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.241 --rc genhtml_branch_coverage=1 00:08:44.241 --rc genhtml_function_coverage=1 00:08:44.241 --rc genhtml_legend=1 00:08:44.241 --rc geninfo_all_blocks=1 00:08:44.241 --rc geninfo_unexecuted_blocks=1 00:08:44.241 00:08:44.241 ' 00:08:44.241 11:04:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.241 --rc genhtml_branch_coverage=1 00:08:44.241 --rc genhtml_function_coverage=1 00:08:44.241 --rc genhtml_legend=1 00:08:44.241 --rc geninfo_all_blocks=1 00:08:44.241 --rc geninfo_unexecuted_blocks=1 00:08:44.241 00:08:44.241 ' 00:08:44.241 11:04:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.241 --rc genhtml_branch_coverage=1 00:08:44.241 --rc genhtml_function_coverage=1 00:08:44.241 --rc genhtml_legend=1 00:08:44.241 --rc geninfo_all_blocks=1 00:08:44.241 --rc geninfo_unexecuted_blocks=1 00:08:44.241 00:08:44.241 ' 00:08:44.241 11:04:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.241 --rc genhtml_branch_coverage=1 00:08:44.241 --rc genhtml_function_coverage=1 00:08:44.241 --rc genhtml_legend=1 00:08:44.241 --rc geninfo_all_blocks=1 00:08:44.241 --rc geninfo_unexecuted_blocks=1 00:08:44.241 00:08:44.241 ' 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.241 11:04:55 -- nvmf/common.sh@7 -- # uname -s 00:08:44.241 11:04:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.241 11:04:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.241 11:04:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.241 11:04:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.241 11:04:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.241 11:04:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.241 11:04:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.241 11:04:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.241 11:04:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.241 11:04:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:44.241 
11:04:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:44.241 11:04:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.241 11:04:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.241 11:04:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.241 11:04:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.241 11:04:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.241 11:04:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.241 11:04:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.241 11:04:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.241 11:04:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.241 11:04:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.241 11:04:55 -- paths/export.sh@5 -- # export PATH 00:08:44.241 11:04:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.241 11:04:55 -- nvmf/common.sh@46 -- # : 0 00:08:44.241 11:04:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.241 11:04:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.241 11:04:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.241 11:04:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.241 11:04:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.241 11:04:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
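The host NQN used by these tests is generated on the fly with nvme-cli, and its UUID suffix doubles as the host ID seen just above. A minimal sketch of that derivation (the parameter-expansion form is an assumption; common.sh may extract the UUID differently):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keeps only the UUID after the last ':'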
00:08:44.241 11:04:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.241 11:04:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.241 11:04:55 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:44.241 11:04:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.241 11:04:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.241 11:04:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.241 11:04:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.241 11:04:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.241 11:04:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.241 11:04:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.241 11:04:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.241 11:04:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:44.241 11:04:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:44.241 11:04:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.241 11:04:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.241 11:04:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.241 11:04:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:44.241 11:04:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.241 11:04:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.241 11:04:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.241 11:04:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.241 11:04:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.241 11:04:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.241 11:04:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.241 11:04:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.241 11:04:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:44.242 11:04:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:44.242 Cannot find device "nvmf_tgt_br" 00:08:44.242 11:04:55 -- nvmf/common.sh@154 -- # true 00:08:44.242 11:04:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.242 Cannot find device "nvmf_tgt_br2" 00:08:44.242 11:04:55 -- nvmf/common.sh@155 -- # true 00:08:44.242 11:04:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:44.242 11:04:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:44.242 Cannot find device "nvmf_tgt_br" 00:08:44.242 11:04:55 -- nvmf/common.sh@157 -- # true 00:08:44.242 11:04:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:44.242 Cannot find device "nvmf_tgt_br2" 00:08:44.242 11:04:55 -- nvmf/common.sh@158 -- # true 00:08:44.242 11:04:55 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:08:44.242 11:04:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:44.500 11:04:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.500 11:04:55 -- nvmf/common.sh@161 -- # true 00:08:44.500 11:04:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.500 11:04:55 -- nvmf/common.sh@162 -- # true 00:08:44.500 11:04:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.500 11:04:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.500 11:04:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.500 11:04:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.500 11:04:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.500 11:04:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.500 11:04:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.500 11:04:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:44.500 11:04:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:44.500 11:04:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:44.500 11:04:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:44.500 11:04:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:44.500 11:04:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:44.500 11:04:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.500 11:04:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.500 11:04:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.500 11:04:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:44.501 11:04:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:44.501 11:04:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.501 11:04:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.501 11:04:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.501 11:04:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.501 11:04:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.501 11:04:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:44.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:44.501 00:08:44.501 --- 10.0.0.2 ping statistics --- 00:08:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.501 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:44.501 11:04:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:44.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:44.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:44.501 00:08:44.501 --- 10.0.0.3 ping statistics --- 00:08:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.501 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:44.501 11:04:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:44.501 00:08:44.501 --- 10.0.0.1 ping statistics --- 00:08:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.501 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:44.501 11:04:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.501 11:04:55 -- nvmf/common.sh@421 -- # return 0 00:08:44.501 11:04:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:44.501 11:04:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.501 11:04:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:44.501 11:04:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:44.501 11:04:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.501 11:04:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:44.501 11:04:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:44.501 11:04:55 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:44.501 11:04:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:44.501 11:04:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.501 11:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.501 11:04:55 -- nvmf/common.sh@469 -- # nvmfpid=72340 00:08:44.501 11:04:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:44.501 11:04:55 -- nvmf/common.sh@470 -- # waitforlisten 72340 00:08:44.501 11:04:55 -- common/autotest_common.sh@829 -- # '[' -z 72340 ']' 00:08:44.501 11:04:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.501 11:04:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.501 11:04:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.501 11:04:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.501 11:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.759 [2024-12-06 11:04:55.668542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:44.759 [2024-12-06 11:04:55.668691] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.759 [2024-12-06 11:04:55.812032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.759 [2024-12-06 11:04:55.850626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.759 [2024-12-06 11:04:55.850965] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.759 [2024-12-06 11:04:55.851093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:44.759 [2024-12-06 11:04:55.851290] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.759 [2024-12-06 11:04:55.851560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.759 [2024-12-06 11:04:55.851739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.759 [2024-12-06 11:04:55.851749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.694 11:04:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.694 11:04:56 -- common/autotest_common.sh@862 -- # return 0 00:08:45.694 11:04:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:45.694 11:04:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.694 11:04:56 -- common/autotest_common.sh@10 -- # set +x 00:08:45.694 11:04:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.694 11:04:56 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.952 [2024-12-06 11:04:56.963539] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.952 11:04:56 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.211 11:04:57 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:46.211 11:04:57 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.471 11:04:57 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:46.471 11:04:57 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:46.730 11:04:57 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:46.990 11:04:58 -- target/nvmf_lvol.sh@29 -- # lvs=514cf68f-3bf8-4517-bae4-838b147bc074 00:08:46.990 11:04:58 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 514cf68f-3bf8-4517-bae4-838b147bc074 lvol 20 00:08:47.249 11:04:58 -- target/nvmf_lvol.sh@32 -- # lvol=244190c6-e1cf-4545-9541-831e8a3739a5 00:08:47.249 11:04:58 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.508 11:04:58 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 244190c6-e1cf-4545-9541-831e8a3739a5 00:08:47.767 11:04:58 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.026 [2024-12-06 11:04:58.996055] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.026 11:04:59 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.285 11:04:59 -- target/nvmf_lvol.sh@42 -- # perf_pid=72421 00:08:48.285 11:04:59 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:48.285 11:04:59 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:49.221 11:05:00 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 244190c6-e1cf-4545-9541-831e8a3739a5 MY_SNAPSHOT 
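Taken together, the trace above builds the lvol target out of two 64 MB malloc bdevs striped into a raid0, carves a 20 MB lvol from an lvstore on top, and exports it over TCP on 10.0.0.2:4420. A condensed sketch of that sequence with rpc.py (the UUID placeholders stand for the values captured in the trace, which differ on every run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # Malloc0
  $rpc bdev_malloc_create 64 512                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs           # prints the lvstore UUID
  $rpc bdev_lvol_create -u <lvstore-uuid> lvol 20   # prints the lvol UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420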
00:08:49.480 11:05:00 -- target/nvmf_lvol.sh@47 -- # snapshot=7e02e296-c7e0-405c-a1d7-0bdfbb78bfa7 00:08:49.480 11:05:00 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 244190c6-e1cf-4545-9541-831e8a3739a5 30 00:08:49.739 11:05:00 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7e02e296-c7e0-405c-a1d7-0bdfbb78bfa7 MY_CLONE 00:08:49.998 11:05:01 -- target/nvmf_lvol.sh@49 -- # clone=8f401b6a-b95a-4116-a26b-5df2914e8586 00:08:49.998 11:05:01 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8f401b6a-b95a-4116-a26b-5df2914e8586 00:08:50.564 11:05:01 -- target/nvmf_lvol.sh@53 -- # wait 72421 00:08:58.742 Initializing NVMe Controllers 00:08:58.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:58.742 Controller IO queue size 128, less than required. 00:08:58.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:58.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:58.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:58.742 Initialization complete. Launching workers. 00:08:58.742 ======================================================== 00:08:58.742 Latency(us) 00:08:58.742 Device Information : IOPS MiB/s Average min max 00:08:58.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10152.59 39.66 12616.67 530.63 62220.77 00:08:58.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9992.40 39.03 12820.98 2658.09 59298.06 00:08:58.742 ======================================================== 00:08:58.742 Total : 20144.99 78.69 12718.01 530.63 62220.77 00:08:58.742 00:08:58.742 11:05:09 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.742 11:05:09 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 244190c6-e1cf-4545-9541-831e8a3739a5 00:08:58.999 11:05:10 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 514cf68f-3bf8-4517-bae4-838b147bc074 00:08:59.257 11:05:10 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:59.257 11:05:10 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:59.257 11:05:10 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:59.257 11:05:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.257 11:05:10 -- nvmf/common.sh@116 -- # sync 00:08:59.257 11:05:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:59.257 11:05:10 -- nvmf/common.sh@119 -- # set +e 00:08:59.257 11:05:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.257 11:05:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:59.257 rmmod nvme_tcp 00:08:59.257 rmmod nvme_fabrics 00:08:59.257 rmmod nvme_keyring 00:08:59.257 11:05:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.257 11:05:10 -- nvmf/common.sh@123 -- # set -e 00:08:59.257 11:05:10 -- nvmf/common.sh@124 -- # return 0 00:08:59.257 11:05:10 -- nvmf/common.sh@477 -- # '[' -n 72340 ']' 00:08:59.257 11:05:10 -- nvmf/common.sh@478 -- # killprocess 72340 00:08:59.257 11:05:10 -- common/autotest_common.sh@936 -- # '[' -z 72340 ']' 00:08:59.257 11:05:10 -- common/autotest_common.sh@940 -- # kill -0 72340 00:08:59.257 11:05:10 -- common/autotest_common.sh@941 -- # uname 00:08:59.257 11:05:10 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.257 11:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72340 00:08:59.515 killing process with pid 72340 00:08:59.515 11:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:59.515 11:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:59.515 11:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72340' 00:08:59.515 11:05:10 -- common/autotest_common.sh@955 -- # kill 72340 00:08:59.515 11:05:10 -- common/autotest_common.sh@960 -- # wait 72340 00:08:59.515 11:05:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:59.515 11:05:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:59.515 11:05:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:59.515 11:05:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.515 11:05:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:59.515 11:05:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.515 11:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.515 11:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.515 11:05:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:59.515 ************************************ 00:08:59.515 END TEST nvmf_lvol 00:08:59.515 ************************************ 00:08:59.515 00:08:59.515 real 0m15.570s 00:08:59.515 user 1m4.544s 00:08:59.515 sys 0m4.488s 00:08:59.515 11:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.515 11:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.774 11:05:10 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.774 11:05:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.774 11:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.774 11:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.774 ************************************ 00:08:59.774 START TEST nvmf_lvs_grow 00:08:59.774 ************************************ 00:08:59.774 11:05:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.774 * Looking for test storage... 
00:08:59.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:59.774 11:05:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:59.774 11:05:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:59.774 11:05:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:59.774 11:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:59.774 11:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:59.774 11:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:59.774 11:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:59.774 11:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:59.774 11:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:59.774 11:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.774 11:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:59.774 11:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:59.774 11:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:59.774 11:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:59.774 11:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:59.774 11:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:59.774 11:05:10 -- scripts/common.sh@344 -- # : 1 00:08:59.774 11:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:59.774 11:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.774 11:05:10 -- scripts/common.sh@364 -- # decimal 1 00:08:59.774 11:05:10 -- scripts/common.sh@352 -- # local d=1 00:08:59.774 11:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.774 11:05:10 -- scripts/common.sh@354 -- # echo 1 00:08:59.774 11:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:59.774 11:05:10 -- scripts/common.sh@365 -- # decimal 2 00:08:59.774 11:05:10 -- scripts/common.sh@352 -- # local d=2 00:08:59.774 11:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.774 11:05:10 -- scripts/common.sh@354 -- # echo 2 00:08:59.775 11:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:59.775 11:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:59.775 11:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:59.775 11:05:10 -- scripts/common.sh@367 -- # return 0 00:08:59.775 11:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.775 11:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:59.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.775 --rc genhtml_branch_coverage=1 00:08:59.775 --rc genhtml_function_coverage=1 00:08:59.775 --rc genhtml_legend=1 00:08:59.775 --rc geninfo_all_blocks=1 00:08:59.775 --rc geninfo_unexecuted_blocks=1 00:08:59.775 00:08:59.775 ' 00:08:59.775 11:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:59.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.775 --rc genhtml_branch_coverage=1 00:08:59.775 --rc genhtml_function_coverage=1 00:08:59.775 --rc genhtml_legend=1 00:08:59.775 --rc geninfo_all_blocks=1 00:08:59.775 --rc geninfo_unexecuted_blocks=1 00:08:59.775 00:08:59.775 ' 00:08:59.775 11:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:59.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.775 --rc genhtml_branch_coverage=1 00:08:59.775 --rc genhtml_function_coverage=1 00:08:59.775 --rc genhtml_legend=1 00:08:59.775 --rc geninfo_all_blocks=1 00:08:59.775 --rc geninfo_unexecuted_blocks=1 00:08:59.775 00:08:59.775 ' 00:08:59.775 
11:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:59.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.775 --rc genhtml_branch_coverage=1 00:08:59.775 --rc genhtml_function_coverage=1 00:08:59.775 --rc genhtml_legend=1 00:08:59.775 --rc geninfo_all_blocks=1 00:08:59.775 --rc geninfo_unexecuted_blocks=1 00:08:59.775 00:08:59.775 ' 00:08:59.775 11:05:10 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:59.775 11:05:10 -- nvmf/common.sh@7 -- # uname -s 00:08:59.775 11:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.775 11:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.775 11:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.775 11:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.775 11:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.775 11:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.775 11:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.775 11:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.775 11:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.775 11:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:59.775 11:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:08:59.775 11:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.775 11:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.775 11:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:59.775 11:05:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.775 11:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.775 11:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.775 11:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.775 11:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.775 11:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.775 11:05:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.775 11:05:10 -- paths/export.sh@5 -- # export PATH 00:08:59.775 11:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.775 11:05:10 -- nvmf/common.sh@46 -- # : 0 00:08:59.775 11:05:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:59.775 11:05:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:59.775 11:05:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:59.775 11:05:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.775 11:05:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.775 11:05:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:59.775 11:05:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:59.775 11:05:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:59.775 11:05:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.775 11:05:10 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:59.775 11:05:10 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:59.775 11:05:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:59.775 11:05:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.775 11:05:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:59.775 11:05:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:59.775 11:05:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:59.775 11:05:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.775 11:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.775 11:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.775 11:05:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:59.775 11:05:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:59.775 11:05:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.775 11:05:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.775 11:05:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:59.775 11:05:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:59.775 11:05:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:59.775 11:05:10 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:59.775 11:05:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:59.775 11:05:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.775 11:05:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:59.775 11:05:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:59.775 11:05:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:59.775 11:05:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:59.775 11:05:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:59.775 11:05:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:59.775 Cannot find device "nvmf_tgt_br" 00:08:59.775 11:05:10 -- nvmf/common.sh@154 -- # true 00:08:59.775 11:05:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.035 Cannot find device "nvmf_tgt_br2" 00:09:00.035 11:05:10 -- nvmf/common.sh@155 -- # true 00:09:00.035 11:05:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:00.035 11:05:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:00.035 Cannot find device "nvmf_tgt_br" 00:09:00.035 11:05:10 -- nvmf/common.sh@157 -- # true 00:09:00.035 11:05:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:00.035 Cannot find device "nvmf_tgt_br2" 00:09:00.035 11:05:10 -- nvmf/common.sh@158 -- # true 00:09:00.035 11:05:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:00.035 11:05:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:00.035 11:05:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.035 11:05:11 -- nvmf/common.sh@161 -- # true 00:09:00.035 11:05:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.035 11:05:11 -- nvmf/common.sh@162 -- # true 00:09:00.035 11:05:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.035 11:05:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.035 11:05:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.035 11:05:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.035 11:05:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.035 11:05:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.035 11:05:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.035 11:05:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:00.035 11:05:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:00.035 11:05:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:00.035 11:05:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:00.035 11:05:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:00.035 11:05:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:00.035 11:05:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.035 11:05:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
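Boiled down, nvmf_veth_init places the target end of each veth pair in its own network namespace and leaves the initiator end on the host, so 10.0.0.1 can reach 10.0.0.2/10.0.0.3 through the bridge wired up just below. A minimal sketch of the namespace plumbing traced above (bridge and iptables steps omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up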
00:09:00.295 11:05:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.295 11:05:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:00.295 11:05:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:00.295 11:05:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.295 11:05:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.295 11:05:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.295 11:05:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.295 11:05:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.295 11:05:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:00.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:00.295 00:09:00.295 --- 10.0.0.2 ping statistics --- 00:09:00.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.295 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:00.295 11:05:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:00.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:00.295 00:09:00.295 --- 10.0.0.3 ping statistics --- 00:09:00.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.295 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:00.295 11:05:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:00.295 00:09:00.295 --- 10.0.0.1 ping statistics --- 00:09:00.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.295 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:00.295 11:05:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.295 11:05:11 -- nvmf/common.sh@421 -- # return 0 00:09:00.295 11:05:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:00.295 11:05:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.295 11:05:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:00.295 11:05:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:00.295 11:05:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.295 11:05:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:00.295 11:05:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:00.295 11:05:11 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:00.295 11:05:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:00.295 11:05:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.295 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 11:05:11 -- nvmf/common.sh@469 -- # nvmfpid=72751 00:09:00.295 11:05:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:00.295 11:05:11 -- nvmf/common.sh@470 -- # waitforlisten 72751 00:09:00.295 11:05:11 -- common/autotest_common.sh@829 -- # '[' -z 72751 ']' 00:09:00.295 11:05:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.295 11:05:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.295 11:05:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:00.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.295 11:05:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.295 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:00.295 [2024-12-06 11:05:11.333238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.295 [2024-12-06 11:05:11.333331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.553 [2024-12-06 11:05:11.471329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.553 [2024-12-06 11:05:11.514093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.553 [2024-12-06 11:05:11.514276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.553 [2024-12-06 11:05:11.514293] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.553 [2024-12-06 11:05:11.514304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.553 [2024-12-06 11:05:11.514344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.501 11:05:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.501 11:05:12 -- common/autotest_common.sh@862 -- # return 0 00:09:01.501 11:05:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:01.501 11:05:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.501 11:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:01.501 11:05:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.501 11:05:12 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:01.761 [2024-12-06 11:05:12.666800] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:01.761 11:05:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.761 11:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.761 11:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 ************************************ 00:09:01.761 START TEST lvs_grow_clean 00:09:01.761 ************************************ 00:09:01.761 11:05:12 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.761 11:05:12 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.019 11:05:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:02.019 11:05:13 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:02.278 11:05:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:02.278 11:05:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:02.278 11:05:13 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:02.537 11:05:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:02.537 11:05:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:02.537 11:05:13 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28ede39b-b854-42e9-9eb7-3aee87db4699 lvol 150 00:09:02.796 11:05:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d164a7d1-4c8e-477c-af9a-ceef61133728 00:09:02.796 11:05:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:02.796 11:05:13 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:03.054 [2024-12-06 11:05:13.995332] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:03.054 [2024-12-06 11:05:13.995442] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:03.054 true 00:09:03.054 11:05:14 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:03.054 11:05:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:03.312 11:05:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:03.312 11:05:14 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.571 11:05:14 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d164a7d1-4c8e-477c-af9a-ceef61133728 00:09:03.830 11:05:14 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:04.093 [2024-12-06 11:05:14.983995] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.093 11:05:15 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.093 11:05:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72839 00:09:04.093 11:05:15 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:04.093 11:05:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.093 11:05:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72839 /var/tmp/bdevperf.sock 00:09:04.093 11:05:15 -- common/autotest_common.sh@829 -- # '[' -z 72839 ']' 00:09:04.093 11:05:15 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.093 11:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.093 11:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.093 11:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.093 11:05:15 -- common/autotest_common.sh@10 -- # set +x 00:09:04.351 [2024-12-06 11:05:15.277404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.351 [2024-12-06 11:05:15.277511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72839 ] 00:09:04.351 [2024-12-06 11:05:15.423347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.351 [2024-12-06 11:05:15.462581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.300 11:05:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.300 11:05:16 -- common/autotest_common.sh@862 -- # return 0 00:09:05.300 11:05:16 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.558 Nvme0n1 00:09:05.558 11:05:16 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.817 [ 00:09:05.818 { 00:09:05.818 "name": "Nvme0n1", 00:09:05.818 "aliases": [ 00:09:05.818 "d164a7d1-4c8e-477c-af9a-ceef61133728" 00:09:05.818 ], 00:09:05.818 "product_name": "NVMe disk", 00:09:05.818 "block_size": 4096, 00:09:05.818 "num_blocks": 38912, 00:09:05.818 "uuid": "d164a7d1-4c8e-477c-af9a-ceef61133728", 00:09:05.818 "assigned_rate_limits": { 00:09:05.818 "rw_ios_per_sec": 0, 00:09:05.818 "rw_mbytes_per_sec": 0, 00:09:05.818 "r_mbytes_per_sec": 0, 00:09:05.818 "w_mbytes_per_sec": 0 00:09:05.818 }, 00:09:05.818 "claimed": false, 00:09:05.818 "zoned": false, 00:09:05.818 "supported_io_types": { 00:09:05.818 "read": true, 00:09:05.818 "write": true, 00:09:05.818 "unmap": true, 00:09:05.818 "write_zeroes": true, 00:09:05.818 "flush": true, 00:09:05.818 "reset": true, 00:09:05.818 "compare": true, 00:09:05.818 "compare_and_write": true, 00:09:05.818 "abort": true, 00:09:05.818 "nvme_admin": true, 00:09:05.818 "nvme_io": true 00:09:05.818 }, 00:09:05.818 "driver_specific": { 00:09:05.818 "nvme": [ 00:09:05.818 { 00:09:05.818 "trid": { 00:09:05.818 "trtype": "TCP", 00:09:05.818 "adrfam": "IPv4", 00:09:05.818 "traddr": "10.0.0.2", 00:09:05.818 "trsvcid": "4420", 00:09:05.818 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.818 }, 00:09:05.818 "ctrlr_data": { 00:09:05.818 "cntlid": 1, 00:09:05.818 "vendor_id": "0x8086", 00:09:05.818 "model_number": "SPDK bdev Controller", 00:09:05.818 "serial_number": "SPDK0", 00:09:05.818 "firmware_revision": "24.01.1", 00:09:05.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.818 "oacs": { 00:09:05.818 "security": 0, 00:09:05.818 "format": 0, 00:09:05.818 "firmware": 0, 00:09:05.818 "ns_manage": 0 00:09:05.818 }, 00:09:05.818 "multi_ctrlr": true, 00:09:05.818 "ana_reporting": false 00:09:05.818 }, 00:09:05.818 "vs": { 00:09:05.818 "nvme_version": "1.3" 00:09:05.818 }, 
00:09:05.818 "ns_data": { 00:09:05.818 "id": 1, 00:09:05.818 "can_share": true 00:09:05.818 } 00:09:05.818 } 00:09:05.818 ], 00:09:05.818 "mp_policy": "active_passive" 00:09:05.818 } 00:09:05.818 } 00:09:05.818 ] 00:09:05.818 11:05:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72857 00:09:05.818 11:05:16 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.818 11:05:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:05.818 Running I/O for 10 seconds... 00:09:06.754 Latency(us) 00:09:06.754 [2024-12-06T11:05:17.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.754 [2024-12-06T11:05:17.901Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.754 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:06.754 [2024-12-06T11:05:17.901Z] =================================================================================================================== 00:09:06.754 [2024-12-06T11:05:17.901Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:06.754 00:09:07.694 11:05:18 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:07.952 [2024-12-06T11:05:19.099Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.952 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:07.952 [2024-12-06T11:05:19.099Z] =================================================================================================================== 00:09:07.952 [2024-12-06T11:05:19.099Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:07.952 00:09:07.952 true 00:09:08.211 11:05:19 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:08.211 11:05:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:08.470 11:05:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:08.470 11:05:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:08.470 11:05:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 72857 00:09:08.729 [2024-12-06T11:05:19.876Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.729 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:08.729 [2024-12-06T11:05:19.876Z] =================================================================================================================== 00:09:08.729 [2024-12-06T11:05:19.876Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:08.729 00:09:10.150 [2024-12-06T11:05:21.297Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.150 Nvme0n1 : 4.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:10.150 [2024-12-06T11:05:21.297Z] =================================================================================================================== 00:09:10.150 [2024-12-06T11:05:21.297Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:10.150 00:09:11.086 [2024-12-06T11:05:22.233Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.086 Nvme0n1 : 5.00 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:09:11.086 [2024-12-06T11:05:22.233Z] =================================================================================================================== 00:09:11.086 [2024-12-06T11:05:22.233Z] Total : 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:09:11.086 
00:09:12.023 [2024-12-06T11:05:23.170Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.023 Nvme0n1 : 6.00 6752.17 26.38 0.00 0.00 0.00 0.00 0.00 00:09:12.023 [2024-12-06T11:05:23.170Z] =================================================================================================================== 00:09:12.023 [2024-12-06T11:05:23.170Z] Total : 6752.17 26.38 0.00 0.00 0.00 0.00 0.00 00:09:12.023 00:09:12.960 [2024-12-06T11:05:24.107Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.960 Nvme0n1 : 7.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:12.960 [2024-12-06T11:05:24.107Z] =================================================================================================================== 00:09:12.960 [2024-12-06T11:05:24.107Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:12.960 00:09:13.897 [2024-12-06T11:05:25.044Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.897 Nvme0n1 : 8.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:13.897 [2024-12-06T11:05:25.044Z] =================================================================================================================== 00:09:13.897 [2024-12-06T11:05:25.044Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:13.897 00:09:14.835 [2024-12-06T11:05:25.982Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.835 Nvme0n1 : 9.00 6674.56 26.07 0.00 0.00 0.00 0.00 0.00 00:09:14.835 [2024-12-06T11:05:25.982Z] =================================================================================================================== 00:09:14.835 [2024-12-06T11:05:25.982Z] Total : 6674.56 26.07 0.00 0.00 0.00 0.00 0.00 00:09:14.835 00:09:15.770 [2024-12-06T11:05:26.917Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.770 Nvme0n1 : 10.00 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:15.770 [2024-12-06T11:05:26.917Z] =================================================================================================================== 00:09:15.770 [2024-12-06T11:05:26.917Z] Total : 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:15.770 00:09:15.770 00:09:15.770 Latency(us) 00:09:15.770 [2024-12-06T11:05:26.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.770 [2024-12-06T11:05:26.917Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.770 Nvme0n1 : 10.02 6654.07 25.99 0.00 0.00 19229.58 16086.11 42181.35 00:09:15.770 [2024-12-06T11:05:26.917Z] =================================================================================================================== 00:09:15.770 [2024-12-06T11:05:26.917Z] Total : 6654.07 25.99 0.00 0.00 19229.58 16086.11 42181.35 00:09:15.770 0 00:09:15.770 11:05:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72839 00:09:15.770 11:05:26 -- common/autotest_common.sh@936 -- # '[' -z 72839 ']' 00:09:15.770 11:05:26 -- common/autotest_common.sh@940 -- # kill -0 72839 00:09:15.770 11:05:26 -- common/autotest_common.sh@941 -- # uname 00:09:16.028 11:05:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.028 11:05:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72839 00:09:16.028 11:05:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:16.028 11:05:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:16.028 killing process with pid 72839 00:09:16.028 11:05:26 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 72839' 00:09:16.028 11:05:26 -- common/autotest_common.sh@955 -- # kill 72839 00:09:16.028 Received shutdown signal, test time was about 10.000000 seconds 00:09:16.028 00:09:16.028 Latency(us) 00:09:16.028 [2024-12-06T11:05:27.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.028 [2024-12-06T11:05:27.175Z] =================================================================================================================== 00:09:16.028 [2024-12-06T11:05:27.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:16.028 11:05:26 -- common/autotest_common.sh@960 -- # wait 72839 00:09:16.028 11:05:27 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.285 11:05:27 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:16.285 11:05:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:16.543 11:05:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:16.543 11:05:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:16.543 11:05:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.802 [2024-12-06 11:05:27.907193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.802 11:05:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:16.802 11:05:27 -- common/autotest_common.sh@650 -- # local es=0 00:09:16.802 11:05:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:16.802 11:05:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.802 11:05:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.802 11:05:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.802 11:05:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.802 11:05:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.802 11:05:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.802 11:05:27 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.802 11:05:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:16.802 11:05:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:17.062 request: 00:09:17.062 { 00:09:17.062 "uuid": "28ede39b-b854-42e9-9eb7-3aee87db4699", 00:09:17.062 "method": "bdev_lvol_get_lvstores", 00:09:17.062 "req_id": 1 00:09:17.062 } 00:09:17.062 Got JSON-RPC error response 00:09:17.062 response: 00:09:17.062 { 00:09:17.062 "code": -19, 00:09:17.062 "message": "No such device" 00:09:17.062 } 00:09:17.321 11:05:28 -- common/autotest_common.sh@653 -- # es=1 00:09:17.321 11:05:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.321 11:05:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.321 11:05:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.321 11:05:28 -- 
target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.580 aio_bdev 00:09:17.580 11:05:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d164a7d1-4c8e-477c-af9a-ceef61133728 00:09:17.580 11:05:28 -- common/autotest_common.sh@897 -- # local bdev_name=d164a7d1-4c8e-477c-af9a-ceef61133728 00:09:17.580 11:05:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:17.580 11:05:28 -- common/autotest_common.sh@899 -- # local i 00:09:17.580 11:05:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:17.580 11:05:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:17.580 11:05:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.580 11:05:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d164a7d1-4c8e-477c-af9a-ceef61133728 -t 2000 00:09:17.839 [ 00:09:17.839 { 00:09:17.839 "name": "d164a7d1-4c8e-477c-af9a-ceef61133728", 00:09:17.839 "aliases": [ 00:09:17.839 "lvs/lvol" 00:09:17.839 ], 00:09:17.839 "product_name": "Logical Volume", 00:09:17.839 "block_size": 4096, 00:09:17.839 "num_blocks": 38912, 00:09:17.839 "uuid": "d164a7d1-4c8e-477c-af9a-ceef61133728", 00:09:17.839 "assigned_rate_limits": { 00:09:17.839 "rw_ios_per_sec": 0, 00:09:17.839 "rw_mbytes_per_sec": 0, 00:09:17.839 "r_mbytes_per_sec": 0, 00:09:17.839 "w_mbytes_per_sec": 0 00:09:17.839 }, 00:09:17.839 "claimed": false, 00:09:17.839 "zoned": false, 00:09:17.839 "supported_io_types": { 00:09:17.839 "read": true, 00:09:17.839 "write": true, 00:09:17.839 "unmap": true, 00:09:17.839 "write_zeroes": true, 00:09:17.839 "flush": false, 00:09:17.839 "reset": true, 00:09:17.839 "compare": false, 00:09:17.839 "compare_and_write": false, 00:09:17.839 "abort": false, 00:09:17.839 "nvme_admin": false, 00:09:17.839 "nvme_io": false 00:09:17.839 }, 00:09:17.839 "driver_specific": { 00:09:17.839 "lvol": { 00:09:17.839 "lvol_store_uuid": "28ede39b-b854-42e9-9eb7-3aee87db4699", 00:09:17.839 "base_bdev": "aio_bdev", 00:09:17.839 "thin_provision": false, 00:09:17.839 "snapshot": false, 00:09:17.839 "clone": false, 00:09:17.839 "esnap_clone": false 00:09:17.839 } 00:09:17.839 } 00:09:17.839 } 00:09:17.839 ] 00:09:17.839 11:05:28 -- common/autotest_common.sh@905 -- # return 0 00:09:17.839 11:05:28 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:17.839 11:05:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:18.098 11:05:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:18.098 11:05:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:18.098 11:05:29 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:18.356 11:05:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:18.356 11:05:29 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d164a7d1-4c8e-477c-af9a-ceef61133728 00:09:18.616 11:05:29 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28ede39b-b854-42e9-9eb7-3aee87db4699 00:09:18.875 11:05:29 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.133 11:05:30 -- 
target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.701 ************************************ 00:09:19.701 END TEST lvs_grow_clean 00:09:19.701 ************************************ 00:09:19.701 00:09:19.701 real 0m17.896s 00:09:19.701 user 0m16.853s 00:09:19.701 sys 0m2.440s 00:09:19.701 11:05:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.701 11:05:30 -- common/autotest_common.sh@10 -- # set +x 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:19.701 11:05:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:19.701 11:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.701 11:05:30 -- common/autotest_common.sh@10 -- # set +x 00:09:19.701 ************************************ 00:09:19.701 START TEST lvs_grow_dirty 00:09:19.701 ************************************ 00:09:19.701 11:05:30 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.701 11:05:30 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:19.960 11:05:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:19.960 11:05:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.219 11:05:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:20.220 11:05:31 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:20.220 11:05:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.478 11:05:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.478 11:05:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.478 11:05:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a41a3282-005b-435e-b33a-b9aecd95edb3 lvol 150 00:09:20.737 11:05:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bc297039-3854-4290-9372-865a7d58bd90 00:09:20.737 11:05:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:20.737 11:05:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:20.996 [2024-12-06 11:05:31.961406] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:20.996 [2024-12-06 11:05:31.961562] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: 
*NOTICE*: Unsupported bdev event: type 1 00:09:20.996 true 00:09:20.996 11:05:31 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:20.996 11:05:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.255 11:05:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.255 11:05:32 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.512 11:05:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc297039-3854-4290-9372-865a7d58bd90 00:09:21.769 11:05:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:22.028 11:05:32 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.288 11:05:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73102 00:09:22.288 11:05:33 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.288 11:05:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.288 11:05:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73102 /var/tmp/bdevperf.sock 00:09:22.288 11:05:33 -- common/autotest_common.sh@829 -- # '[' -z 73102 ']' 00:09:22.288 11:05:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.288 11:05:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.288 11:05:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.288 11:05:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.288 11:05:33 -- common/autotest_common.sh@10 -- # set +x 00:09:22.288 [2024-12-06 11:05:33.234948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:22.288 [2024-12-06 11:05:33.235069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73102 ] 00:09:22.288 [2024-12-06 11:05:33.378517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.288 [2024-12-06 11:05:33.420429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.222 11:05:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.222 11:05:34 -- common/autotest_common.sh@862 -- # return 0 00:09:23.222 11:05:34 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.479 Nvme0n1 00:09:23.479 11:05:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.736 [ 00:09:23.736 { 00:09:23.736 "name": "Nvme0n1", 00:09:23.736 "aliases": [ 00:09:23.736 "bc297039-3854-4290-9372-865a7d58bd90" 00:09:23.736 ], 00:09:23.736 "product_name": "NVMe disk", 00:09:23.736 "block_size": 4096, 00:09:23.736 "num_blocks": 38912, 00:09:23.736 "uuid": "bc297039-3854-4290-9372-865a7d58bd90", 00:09:23.736 "assigned_rate_limits": { 00:09:23.736 "rw_ios_per_sec": 0, 00:09:23.736 "rw_mbytes_per_sec": 0, 00:09:23.736 "r_mbytes_per_sec": 0, 00:09:23.736 "w_mbytes_per_sec": 0 00:09:23.736 }, 00:09:23.736 "claimed": false, 00:09:23.736 "zoned": false, 00:09:23.736 "supported_io_types": { 00:09:23.736 "read": true, 00:09:23.736 "write": true, 00:09:23.736 "unmap": true, 00:09:23.736 "write_zeroes": true, 00:09:23.736 "flush": true, 00:09:23.736 "reset": true, 00:09:23.736 "compare": true, 00:09:23.736 "compare_and_write": true, 00:09:23.736 "abort": true, 00:09:23.736 "nvme_admin": true, 00:09:23.736 "nvme_io": true 00:09:23.736 }, 00:09:23.736 "driver_specific": { 00:09:23.736 "nvme": [ 00:09:23.736 { 00:09:23.736 "trid": { 00:09:23.736 "trtype": "TCP", 00:09:23.736 "adrfam": "IPv4", 00:09:23.736 "traddr": "10.0.0.2", 00:09:23.736 "trsvcid": "4420", 00:09:23.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.736 }, 00:09:23.736 "ctrlr_data": { 00:09:23.736 "cntlid": 1, 00:09:23.736 "vendor_id": "0x8086", 00:09:23.736 "model_number": "SPDK bdev Controller", 00:09:23.736 "serial_number": "SPDK0", 00:09:23.736 "firmware_revision": "24.01.1", 00:09:23.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.736 "oacs": { 00:09:23.736 "security": 0, 00:09:23.736 "format": 0, 00:09:23.736 "firmware": 0, 00:09:23.736 "ns_manage": 0 00:09:23.736 }, 00:09:23.736 "multi_ctrlr": true, 00:09:23.736 "ana_reporting": false 00:09:23.736 }, 00:09:23.736 "vs": { 00:09:23.736 "nvme_version": "1.3" 00:09:23.736 }, 00:09:23.736 "ns_data": { 00:09:23.736 "id": 1, 00:09:23.736 "can_share": true 00:09:23.736 } 00:09:23.736 } 00:09:23.736 ], 00:09:23.737 "mp_policy": "active_passive" 00:09:23.737 } 00:09:23.737 } 00:09:23.737 ] 00:09:23.737 11:05:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73126 00:09:23.737 11:05:34 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.737 11:05:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.737 Running I/O for 10 seconds... 
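The I/O run that follows is driven entirely over bdevperf's RPC socket: bdevperf is started idle with -z, the NVMe-oF controller is attached over TCP, and perform_tests kicks off the workload. A minimal standalone sketch of the same flow, using the socket path, NQN, and workload parameters seen in this trace (repository paths shortened relative to the spdk checkout), would be:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second latency tables below are the output of that perform_tests call against the Nvme0n1 bdev created by the attach.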
00:09:25.134 Latency(us) 00:09:25.134 [2024-12-06T11:05:36.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.134 [2024-12-06T11:05:36.281Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.134 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:25.134 [2024-12-06T11:05:36.281Z] =================================================================================================================== 00:09:25.134 [2024-12-06T11:05:36.281Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:25.134 00:09:25.699 11:05:36 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:25.957 [2024-12-06T11:05:37.104Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.957 Nvme0n1 : 2.00 6472.50 25.28 0.00 0.00 0.00 0.00 0.00 00:09:25.957 [2024-12-06T11:05:37.104Z] =================================================================================================================== 00:09:25.957 [2024-12-06T11:05:37.104Z] Total : 6472.50 25.28 0.00 0.00 0.00 0.00 0.00 00:09:25.957 00:09:25.957 true 00:09:25.957 11:05:37 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:25.957 11:05:37 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.523 11:05:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.523 11:05:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.523 11:05:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 73126 00:09:26.781 [2024-12-06T11:05:37.928Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.781 Nvme0n1 : 3.00 6347.00 24.79 0.00 0.00 0.00 0.00 0.00 00:09:26.781 [2024-12-06T11:05:37.928Z] =================================================================================================================== 00:09:26.781 [2024-12-06T11:05:37.928Z] Total : 6347.00 24.79 0.00 0.00 0.00 0.00 0.00 00:09:26.781 00:09:28.158 [2024-12-06T11:05:39.305Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.158 Nvme0n1 : 4.00 6379.50 24.92 0.00 0.00 0.00 0.00 0.00 00:09:28.158 [2024-12-06T11:05:39.305Z] =================================================================================================================== 00:09:28.158 [2024-12-06T11:05:39.305Z] Total : 6379.50 24.92 0.00 0.00 0.00 0.00 0.00 00:09:28.158 00:09:28.725 [2024-12-06T11:05:39.872Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.725 Nvme0n1 : 5.00 6373.60 24.90 0.00 0.00 0.00 0.00 0.00 00:09:28.725 [2024-12-06T11:05:39.872Z] =================================================================================================================== 00:09:28.725 [2024-12-06T11:05:39.872Z] Total : 6373.60 24.90 0.00 0.00 0.00 0.00 0.00 00:09:28.725 00:09:30.102 [2024-12-06T11:05:41.249Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.102 Nvme0n1 : 6.00 6390.83 24.96 0.00 0.00 0.00 0.00 0.00 00:09:30.102 [2024-12-06T11:05:41.249Z] =================================================================================================================== 00:09:30.102 [2024-12-06T11:05:41.249Z] Total : 6390.83 24.96 0.00 0.00 0.00 0.00 0.00 00:09:30.102 00:09:31.037 [2024-12-06T11:05:42.184Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:31.037 Nvme0n1 : 7.00 6366.86 24.87 0.00 0.00 0.00 0.00 0.00 00:09:31.037 [2024-12-06T11:05:42.184Z] =================================================================================================================== 00:09:31.037 [2024-12-06T11:05:42.184Z] Total : 6366.86 24.87 0.00 0.00 0.00 0.00 0.00 00:09:31.037 00:09:31.974 [2024-12-06T11:05:43.121Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.974 Nvme0n1 : 8.00 6294.12 24.59 0.00 0.00 0.00 0.00 0.00 00:09:31.974 [2024-12-06T11:05:43.121Z] =================================================================================================================== 00:09:31.974 [2024-12-06T11:05:43.121Z] Total : 6294.12 24.59 0.00 0.00 0.00 0.00 0.00 00:09:31.974 00:09:32.910 [2024-12-06T11:05:44.057Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.910 Nvme0n1 : 9.00 6286.22 24.56 0.00 0.00 0.00 0.00 0.00 00:09:32.910 [2024-12-06T11:05:44.057Z] =================================================================================================================== 00:09:32.910 [2024-12-06T11:05:44.057Z] Total : 6286.22 24.56 0.00 0.00 0.00 0.00 0.00 00:09:32.910 00:09:33.848 [2024-12-06T11:05:44.995Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.848 Nvme0n1 : 10.00 6267.20 24.48 0.00 0.00 0.00 0.00 0.00 00:09:33.848 [2024-12-06T11:05:44.995Z] =================================================================================================================== 00:09:33.848 [2024-12-06T11:05:44.995Z] Total : 6267.20 24.48 0.00 0.00 0.00 0.00 0.00 00:09:33.848 00:09:33.848 00:09:33.848 Latency(us) 00:09:33.848 [2024-12-06T11:05:44.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.848 [2024-12-06T11:05:44.995Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.848 Nvme0n1 : 10.01 6273.20 24.50 0.00 0.00 20399.14 4527.94 121062.87 00:09:33.848 [2024-12-06T11:05:44.995Z] =================================================================================================================== 00:09:33.848 [2024-12-06T11:05:44.995Z] Total : 6273.20 24.50 0.00 0.00 20399.14 4527.94 121062.87 00:09:33.848 0 00:09:33.848 11:05:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73102 00:09:33.848 11:05:44 -- common/autotest_common.sh@936 -- # '[' -z 73102 ']' 00:09:33.848 11:05:44 -- common/autotest_common.sh@940 -- # kill -0 73102 00:09:33.849 11:05:44 -- common/autotest_common.sh@941 -- # uname 00:09:33.849 11:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:33.849 11:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73102 00:09:33.849 11:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:33.849 11:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:33.849 killing process with pid 73102 00:09:33.849 11:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73102' 00:09:33.849 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.849 00:09:33.849 Latency(us) 00:09:33.849 [2024-12-06T11:05:44.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.849 [2024-12-06T11:05:44.996Z] =================================================================================================================== 00:09:33.849 [2024-12-06T11:05:44.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.849 11:05:44 -- common/autotest_common.sh@955 
-- # kill 73102 00:09:33.849 11:05:44 -- common/autotest_common.sh@960 -- # wait 73102 00:09:34.108 11:05:45 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.365 11:05:45 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:34.365 11:05:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72751 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 72751 00:09:34.624 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72751 Killed "${NVMF_APP[@]}" "$@" 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:34.624 11:05:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:34.624 11:05:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:34.624 11:05:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.624 11:05:45 -- common/autotest_common.sh@10 -- # set +x 00:09:34.624 11:05:45 -- nvmf/common.sh@469 -- # nvmfpid=73254 00:09:34.624 11:05:45 -- nvmf/common.sh@470 -- # waitforlisten 73254 00:09:34.624 11:05:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.624 11:05:45 -- common/autotest_common.sh@829 -- # '[' -z 73254 ']' 00:09:34.624 11:05:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.624 11:05:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.624 11:05:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.624 11:05:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.624 11:05:45 -- common/autotest_common.sh@10 -- # set +x 00:09:34.624 [2024-12-06 11:05:45.661471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.624 [2024-12-06 11:05:45.661606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.883 [2024-12-06 11:05:45.795739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.883 [2024-12-06 11:05:45.830282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.883 [2024-12-06 11:05:45.830464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.883 [2024-12-06 11:05:45.830492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.883 [2024-12-06 11:05:45.830500] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:34.883 [2024-12-06 11:05:45.830531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.819 11:05:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.819 11:05:46 -- common/autotest_common.sh@862 -- # return 0 00:09:35.819 11:05:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:35.819 11:05:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.819 11:05:46 -- common/autotest_common.sh@10 -- # set +x 00:09:35.819 11:05:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.819 11:05:46 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.819 [2024-12-06 11:05:46.909660] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.819 [2024-12-06 11:05:46.910019] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.819 [2024-12-06 11:05:46.910271] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.819 11:05:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:35.819 11:05:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev bc297039-3854-4290-9372-865a7d58bd90 00:09:35.819 11:05:46 -- common/autotest_common.sh@897 -- # local bdev_name=bc297039-3854-4290-9372-865a7d58bd90 00:09:35.819 11:05:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:35.819 11:05:46 -- common/autotest_common.sh@899 -- # local i 00:09:35.819 11:05:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:35.819 11:05:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:35.819 11:05:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.078 11:05:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc297039-3854-4290-9372-865a7d58bd90 -t 2000 00:09:36.337 [ 00:09:36.337 { 00:09:36.337 "name": "bc297039-3854-4290-9372-865a7d58bd90", 00:09:36.337 "aliases": [ 00:09:36.337 "lvs/lvol" 00:09:36.337 ], 00:09:36.337 "product_name": "Logical Volume", 00:09:36.337 "block_size": 4096, 00:09:36.337 "num_blocks": 38912, 00:09:36.337 "uuid": "bc297039-3854-4290-9372-865a7d58bd90", 00:09:36.337 "assigned_rate_limits": { 00:09:36.337 "rw_ios_per_sec": 0, 00:09:36.337 "rw_mbytes_per_sec": 0, 00:09:36.337 "r_mbytes_per_sec": 0, 00:09:36.337 "w_mbytes_per_sec": 0 00:09:36.337 }, 00:09:36.337 "claimed": false, 00:09:36.337 "zoned": false, 00:09:36.337 "supported_io_types": { 00:09:36.337 "read": true, 00:09:36.337 "write": true, 00:09:36.337 "unmap": true, 00:09:36.337 "write_zeroes": true, 00:09:36.337 "flush": false, 00:09:36.337 "reset": true, 00:09:36.337 "compare": false, 00:09:36.337 "compare_and_write": false, 00:09:36.337 "abort": false, 00:09:36.337 "nvme_admin": false, 00:09:36.337 "nvme_io": false 00:09:36.337 }, 00:09:36.337 "driver_specific": { 00:09:36.337 "lvol": { 00:09:36.337 "lvol_store_uuid": "a41a3282-005b-435e-b33a-b9aecd95edb3", 00:09:36.337 "base_bdev": "aio_bdev", 00:09:36.337 "thin_provision": false, 00:09:36.337 "snapshot": false, 00:09:36.337 "clone": false, 00:09:36.337 "esnap_clone": false 00:09:36.337 } 00:09:36.337 } 00:09:36.337 } 00:09:36.337 ] 00:09:36.337 11:05:47 -- common/autotest_common.sh@905 -- # return 0 00:09:36.337 11:05:47 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:36.337 11:05:47 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:36.596 11:05:47 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:36.596 11:05:47 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:36.596 11:05:47 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:36.855 11:05:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:36.855 11:05:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.114 [2024-12-06 11:05:48.159502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.114 11:05:48 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:37.114 11:05:48 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.114 11:05:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:37.114 11:05:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.114 11:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 11:05:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.114 11:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 11:05:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.114 11:05:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.114 11:05:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.114 11:05:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.114 11:05:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:37.373 request: 00:09:37.373 { 00:09:37.373 "uuid": "a41a3282-005b-435e-b33a-b9aecd95edb3", 00:09:37.373 "method": "bdev_lvol_get_lvstores", 00:09:37.373 "req_id": 1 00:09:37.373 } 00:09:37.373 Got JSON-RPC error response 00:09:37.373 response: 00:09:37.373 { 00:09:37.373 "code": -19, 00:09:37.373 "message": "No such device" 00:09:37.373 } 00:09:37.373 11:05:48 -- common/autotest_common.sh@653 -- # es=1 00:09:37.373 11:05:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.373 11:05:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.373 11:05:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.373 11:05:48 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.632 aio_bdev 00:09:37.632 11:05:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bc297039-3854-4290-9372-865a7d58bd90 00:09:37.632 11:05:48 -- common/autotest_common.sh@897 -- # local bdev_name=bc297039-3854-4290-9372-865a7d58bd90 00:09:37.632 11:05:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:37.632 11:05:48 -- common/autotest_common.sh@899 -- # local i 00:09:37.632 11:05:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:37.632 11:05:48 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:37.632 11:05:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.890 11:05:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc297039-3854-4290-9372-865a7d58bd90 -t 2000 00:09:38.149 [ 00:09:38.149 { 00:09:38.149 "name": "bc297039-3854-4290-9372-865a7d58bd90", 00:09:38.149 "aliases": [ 00:09:38.149 "lvs/lvol" 00:09:38.149 ], 00:09:38.149 "product_name": "Logical Volume", 00:09:38.149 "block_size": 4096, 00:09:38.149 "num_blocks": 38912, 00:09:38.149 "uuid": "bc297039-3854-4290-9372-865a7d58bd90", 00:09:38.149 "assigned_rate_limits": { 00:09:38.149 "rw_ios_per_sec": 0, 00:09:38.149 "rw_mbytes_per_sec": 0, 00:09:38.149 "r_mbytes_per_sec": 0, 00:09:38.149 "w_mbytes_per_sec": 0 00:09:38.149 }, 00:09:38.149 "claimed": false, 00:09:38.149 "zoned": false, 00:09:38.149 "supported_io_types": { 00:09:38.149 "read": true, 00:09:38.149 "write": true, 00:09:38.149 "unmap": true, 00:09:38.149 "write_zeroes": true, 00:09:38.149 "flush": false, 00:09:38.149 "reset": true, 00:09:38.149 "compare": false, 00:09:38.149 "compare_and_write": false, 00:09:38.149 "abort": false, 00:09:38.149 "nvme_admin": false, 00:09:38.149 "nvme_io": false 00:09:38.149 }, 00:09:38.149 "driver_specific": { 00:09:38.149 "lvol": { 00:09:38.149 "lvol_store_uuid": "a41a3282-005b-435e-b33a-b9aecd95edb3", 00:09:38.149 "base_bdev": "aio_bdev", 00:09:38.149 "thin_provision": false, 00:09:38.149 "snapshot": false, 00:09:38.149 "clone": false, 00:09:38.149 "esnap_clone": false 00:09:38.149 } 00:09:38.149 } 00:09:38.149 } 00:09:38.149 ] 00:09:38.149 11:05:49 -- common/autotest_common.sh@905 -- # return 0 00:09:38.149 11:05:49 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:38.149 11:05:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:38.408 11:05:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:38.408 11:05:49 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:38.408 11:05:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:38.669 11:05:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:38.669 11:05:49 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bc297039-3854-4290-9372-865a7d58bd90 00:09:38.932 11:05:50 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a41a3282-005b-435e-b33a-b9aecd95edb3 00:09:39.191 11:05:50 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.450 11:05:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:40.018 ************************************ 00:09:40.018 END TEST lvs_grow_dirty 00:09:40.018 ************************************ 00:09:40.018 00:09:40.018 real 0m20.261s 00:09:40.018 user 0m40.657s 00:09:40.018 sys 0m9.219s 00:09:40.019 11:05:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.019 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:40.019 11:05:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:40.019 11:05:50 -- common/autotest_common.sh@806 -- # type=--id 00:09:40.019 11:05:50 -- 
common/autotest_common.sh@807 -- # id=0 00:09:40.019 11:05:50 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:40.019 11:05:50 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:40.019 11:05:50 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:40.019 11:05:50 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:40.019 11:05:50 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:40.019 11:05:50 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:40.019 nvmf_trace.0 00:09:40.019 11:05:50 -- common/autotest_common.sh@821 -- # return 0 00:09:40.019 11:05:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:40.019 11:05:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:40.019 11:05:50 -- nvmf/common.sh@116 -- # sync 00:09:40.586 11:05:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:40.586 11:05:51 -- nvmf/common.sh@119 -- # set +e 00:09:40.586 11:05:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:40.586 11:05:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:40.586 rmmod nvme_tcp 00:09:40.586 rmmod nvme_fabrics 00:09:40.586 rmmod nvme_keyring 00:09:40.586 11:05:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:40.586 11:05:51 -- nvmf/common.sh@123 -- # set -e 00:09:40.586 11:05:51 -- nvmf/common.sh@124 -- # return 0 00:09:40.586 11:05:51 -- nvmf/common.sh@477 -- # '[' -n 73254 ']' 00:09:40.586 11:05:51 -- nvmf/common.sh@478 -- # killprocess 73254 00:09:40.586 11:05:51 -- common/autotest_common.sh@936 -- # '[' -z 73254 ']' 00:09:40.586 11:05:51 -- common/autotest_common.sh@940 -- # kill -0 73254 00:09:40.586 11:05:51 -- common/autotest_common.sh@941 -- # uname 00:09:40.586 11:05:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.586 11:05:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73254 00:09:40.586 11:05:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.586 killing process with pid 73254 00:09:40.586 11:05:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.586 11:05:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73254' 00:09:40.586 11:05:51 -- common/autotest_common.sh@955 -- # kill 73254 00:09:40.586 11:05:51 -- common/autotest_common.sh@960 -- # wait 73254 00:09:40.586 11:05:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:40.586 11:05:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:40.586 11:05:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:40.586 11:05:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.586 11:05:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:40.586 11:05:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.586 11:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.586 11:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.844 11:05:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:40.844 00:09:40.844 real 0m41.056s 00:09:40.844 user 1m4.356s 00:09:40.844 sys 0m12.663s 00:09:40.844 11:05:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.844 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:09:40.844 ************************************ 00:09:40.844 END TEST nvmf_lvs_grow 00:09:40.844 ************************************ 00:09:40.844 11:05:51 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.844 11:05:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:40.844 11:05:51 -- common/autotest_common.sh@10 -- # set +x 00:09:40.844 ************************************ 00:09:40.844 START TEST nvmf_bdev_io_wait 00:09:40.844 ************************************ 00:09:40.844 11:05:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.844 * Looking for test storage... 00:09:40.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.844 11:05:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:40.844 11:05:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:40.844 11:05:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:40.844 11:05:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:40.844 11:05:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:40.844 11:05:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:40.844 11:05:51 -- scripts/common.sh@335 -- # IFS=.-: 00:09:40.844 11:05:51 -- scripts/common.sh@335 -- # read -ra ver1 00:09:40.844 11:05:51 -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.844 11:05:51 -- scripts/common.sh@336 -- # read -ra ver2 00:09:40.844 11:05:51 -- scripts/common.sh@337 -- # local 'op=<' 00:09:40.844 11:05:51 -- scripts/common.sh@339 -- # ver1_l=2 00:09:40.844 11:05:51 -- scripts/common.sh@340 -- # ver2_l=1 00:09:40.844 11:05:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:40.844 11:05:51 -- scripts/common.sh@343 -- # case "$op" in 00:09:40.844 11:05:51 -- scripts/common.sh@344 -- # : 1 00:09:40.844 11:05:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:40.844 11:05:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.844 11:05:51 -- scripts/common.sh@364 -- # decimal 1 00:09:40.844 11:05:51 -- scripts/common.sh@352 -- # local d=1 00:09:40.844 11:05:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.844 11:05:51 -- scripts/common.sh@354 -- # echo 1 00:09:40.844 11:05:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:40.844 11:05:51 -- scripts/common.sh@365 -- # decimal 2 00:09:40.844 11:05:51 -- scripts/common.sh@352 -- # local d=2 00:09:40.844 11:05:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.844 11:05:51 -- scripts/common.sh@354 -- # echo 2 00:09:40.844 11:05:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:40.844 11:05:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:40.844 11:05:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:40.844 11:05:51 -- scripts/common.sh@367 -- # return 0 00:09:40.844 11:05:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:40.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.844 --rc genhtml_branch_coverage=1 00:09:40.844 --rc genhtml_function_coverage=1 00:09:40.844 --rc genhtml_legend=1 00:09:40.844 --rc geninfo_all_blocks=1 00:09:40.844 --rc geninfo_unexecuted_blocks=1 00:09:40.844 00:09:40.844 ' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:40.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.844 --rc genhtml_branch_coverage=1 00:09:40.844 --rc genhtml_function_coverage=1 00:09:40.844 --rc genhtml_legend=1 00:09:40.844 --rc geninfo_all_blocks=1 00:09:40.844 --rc geninfo_unexecuted_blocks=1 00:09:40.844 00:09:40.844 ' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:40.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.844 --rc genhtml_branch_coverage=1 00:09:40.844 --rc genhtml_function_coverage=1 00:09:40.844 --rc genhtml_legend=1 00:09:40.844 --rc geninfo_all_blocks=1 00:09:40.844 --rc geninfo_unexecuted_blocks=1 00:09:40.844 00:09:40.844 ' 00:09:40.844 11:05:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:40.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.844 --rc genhtml_branch_coverage=1 00:09:40.844 --rc genhtml_function_coverage=1 00:09:40.844 --rc genhtml_legend=1 00:09:40.844 --rc geninfo_all_blocks=1 00:09:40.844 --rc geninfo_unexecuted_blocks=1 00:09:40.844 00:09:40.844 ' 00:09:40.844 11:05:51 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.844 11:05:51 -- nvmf/common.sh@7 -- # uname -s 00:09:40.844 11:05:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.844 11:05:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.844 11:05:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.844 11:05:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.844 11:05:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.844 11:05:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.844 11:05:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.844 11:05:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.844 11:05:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.844 11:05:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.844 11:05:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 
00:09:40.844 11:05:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:09:40.844 11:05:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.844 11:05:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.845 11:05:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.845 11:05:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.845 11:05:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.845 11:05:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.845 11:05:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.845 11:05:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 11:05:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 11:05:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 11:05:51 -- paths/export.sh@5 -- # export PATH 00:09:40.845 11:05:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 11:05:51 -- nvmf/common.sh@46 -- # : 0 00:09:40.845 11:05:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:40.845 11:05:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:40.845 11:05:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:40.845 11:05:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.845 11:05:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.845 11:05:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:41.102 11:05:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:41.102 11:05:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:41.102 11:05:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.102 11:05:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.102 11:05:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.102 11:05:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:41.102 11:05:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.102 11:05:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:41.102 11:05:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:41.102 11:05:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:41.102 11:05:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.102 11:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.102 11:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.102 11:05:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:41.102 11:05:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:41.102 11:05:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:41.102 11:05:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:41.102 11:05:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:41.102 11:05:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:41.102 11:05:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.102 11:05:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.102 11:05:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.102 11:05:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:41.102 11:05:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.102 11:05:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.102 11:05:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.102 11:05:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.102 11:05:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.102 11:05:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.102 11:05:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.102 11:05:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.102 11:05:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:41.102 11:05:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:41.102 Cannot find device "nvmf_tgt_br" 00:09:41.102 11:05:52 -- nvmf/common.sh@154 -- # true 00:09:41.102 11:05:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.102 Cannot find device "nvmf_tgt_br2" 00:09:41.102 11:05:52 -- nvmf/common.sh@155 -- # true 00:09:41.102 11:05:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:41.102 11:05:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:41.102 Cannot find device "nvmf_tgt_br" 00:09:41.102 11:05:52 -- nvmf/common.sh@157 -- # true 00:09:41.102 11:05:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:41.102 Cannot find device "nvmf_tgt_br2" 00:09:41.102 11:05:52 -- nvmf/common.sh@158 -- # true 00:09:41.102 11:05:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:41.102 11:05:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:41.103 11:05:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.103 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.103 11:05:52 -- nvmf/common.sh@161 -- # true 00:09:41.103 11:05:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.103 11:05:52 -- nvmf/common.sh@162 -- # true 00:09:41.103 11:05:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.103 11:05:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.103 11:05:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.103 11:05:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.103 11:05:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.103 11:05:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.103 11:05:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.103 11:05:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:41.103 11:05:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:41.103 11:05:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:41.103 11:05:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:41.103 11:05:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:41.103 11:05:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:41.361 11:05:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.361 11:05:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.361 11:05:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.361 11:05:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:41.361 11:05:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:41.361 11:05:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.361 11:05:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.361 11:05:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.361 11:05:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.361 11:05:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.361 11:05:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:41.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:09:41.361 00:09:41.361 --- 10.0.0.2 ping statistics --- 00:09:41.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.361 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:41.361 11:05:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:41.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:41.361 00:09:41.361 --- 10.0.0.3 ping statistics --- 00:09:41.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.361 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:41.361 11:05:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:41.361 00:09:41.361 --- 10.0.0.1 ping statistics --- 00:09:41.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.361 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:41.361 11:05:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.361 11:05:52 -- nvmf/common.sh@421 -- # return 0 00:09:41.361 11:05:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:41.361 11:05:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.361 11:05:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:41.361 11:05:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:41.361 11:05:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.361 11:05:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:41.361 11:05:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:41.361 11:05:52 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:41.361 11:05:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:41.361 11:05:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.361 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.361 11:05:52 -- nvmf/common.sh@469 -- # nvmfpid=73583 00:09:41.361 11:05:52 -- nvmf/common.sh@470 -- # waitforlisten 73583 00:09:41.361 11:05:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:41.361 11:05:52 -- common/autotest_common.sh@829 -- # '[' -z 73583 ']' 00:09:41.361 11:05:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.361 11:05:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.361 11:05:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.361 11:05:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.361 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.361 [2024-12-06 11:05:52.431643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.361 [2024-12-06 11:05:52.431750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.619 [2024-12-06 11:05:52.571355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.619 [2024-12-06 11:05:52.604818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.619 [2024-12-06 11:05:52.604984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.619 [2024-12-06 11:05:52.604996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.619 [2024-12-06 11:05:52.605004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.619 [2024-12-06 11:05:52.605384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.619 [2024-12-06 11:05:52.605564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.619 [2024-12-06 11:05:52.607579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.619 [2024-12-06 11:05:52.607597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.619 11:05:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.619 11:05:52 -- common/autotest_common.sh@862 -- # return 0 00:09:41.619 11:05:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:41.619 11:05:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.619 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.619 11:05:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.619 11:05:52 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:41.619 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.619 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.619 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.619 11:05:52 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:41.619 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.619 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.877 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.877 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 [2024-12-06 11:05:52.784892] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.877 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.877 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 Malloc0 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.877 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.877 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.877 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.877 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.877 11:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.877 11:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.877 [2024-12-06 11:05:52.839592] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.877 11:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73606 00:09:41.877 11:05:52 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@30 -- # READ_PID=73608 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # config=() 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.877 11:05:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.877 { 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme$subsystem", 00:09:41.877 "trtype": "$TEST_TRANSPORT", 00:09:41.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "$NVMF_PORT", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.877 "hdgst": ${hdgst:-false}, 00:09:41.877 "ddgst": ${ddgst:-false} 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 } 00:09:41.877 EOF 00:09:41.877 )") 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # config=() 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.877 11:05:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.877 { 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme$subsystem", 00:09:41.877 "trtype": "$TEST_TRANSPORT", 00:09:41.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "$NVMF_PORT", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.877 "hdgst": ${hdgst:-false}, 00:09:41.877 "ddgst": ${ddgst:-false} 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 } 00:09:41.877 EOF 00:09:41.877 )") 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73610 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # cat 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # cat 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # config=() 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73615 00:09:41.877 11:05:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.877 { 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme$subsystem", 00:09:41.877 "trtype": "$TEST_TRANSPORT", 00:09:41.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "$NVMF_PORT", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.877 "hdgst": ${hdgst:-false}, 00:09:41.877 "ddgst": ${ddgst:-false} 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 } 00:09:41.877 EOF 
00:09:41.877 )") 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@35 -- # sync 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # config=() 00:09:41.877 11:05:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.877 11:05:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.877 { 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme$subsystem", 00:09:41.877 "trtype": "$TEST_TRANSPORT", 00:09:41.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "$NVMF_PORT", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.877 "hdgst": ${hdgst:-false}, 00:09:41.877 "ddgst": ${ddgst:-false} 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 } 00:09:41.877 EOF 00:09:41.877 )") 00:09:41.877 11:05:52 -- nvmf/common.sh@544 -- # jq . 00:09:41.877 11:05:52 -- nvmf/common.sh@544 -- # jq . 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:41.877 11:05:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.877 11:05:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.877 11:05:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme1", 00:09:41.877 "trtype": "tcp", 00:09:41.877 "traddr": "10.0.0.2", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "4420", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.877 "hdgst": false, 00:09:41.877 "ddgst": false 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 }' 00:09:41.877 11:05:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme1", 00:09:41.877 "trtype": "tcp", 00:09:41.877 "traddr": "10.0.0.2", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "4420", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.877 "hdgst": false, 00:09:41.877 "ddgst": false 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 }' 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # cat 00:09:41.877 11:05:52 -- nvmf/common.sh@542 -- # cat 00:09:41.877 11:05:52 -- nvmf/common.sh@544 -- # jq . 00:09:41.877 11:05:52 -- nvmf/common.sh@544 -- # jq . 
00:09:41.877 11:05:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.877 11:05:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme1", 00:09:41.877 "trtype": "tcp", 00:09:41.877 "traddr": "10.0.0.2", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "4420", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.877 "hdgst": false, 00:09:41.877 "ddgst": false 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 }' 00:09:41.877 11:05:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.877 11:05:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.877 "params": { 00:09:41.877 "name": "Nvme1", 00:09:41.877 "trtype": "tcp", 00:09:41.877 "traddr": "10.0.0.2", 00:09:41.877 "adrfam": "ipv4", 00:09:41.877 "trsvcid": "4420", 00:09:41.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.877 "hdgst": false, 00:09:41.877 "ddgst": false 00:09:41.877 }, 00:09:41.877 "method": "bdev_nvme_attach_controller" 00:09:41.877 }' 00:09:41.877 [2024-12-06 11:05:52.895494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.877 [2024-12-06 11:05:52.895728] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.877 [2024-12-06 11:05:52.897647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.877 [2024-12-06 11:05:52.897718] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:41.877 11:05:52 -- target/bdev_io_wait.sh@37 -- # wait 73606 00:09:41.877 [2024-12-06 11:05:52.915822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.877 [2024-12-06 11:05:52.915898] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:41.877 [2024-12-06 11:05:52.921885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.877 [2024-12-06 11:05:52.921963] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:42.134 [2024-12-06 11:05:53.072102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.134 [2024-12-06 11:05:53.097212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:42.134 [2024-12-06 11:05:53.113288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.134 [2024-12-06 11:05:53.138987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.134 [2024-12-06 11:05:53.161365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.134 [2024-12-06 11:05:53.186295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.134 [2024-12-06 11:05:53.206975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.134 Running I/O for 1 seconds... 
00:09:42.134 [2024-12-06 11:05:53.232278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.134 Running I/O for 1 seconds... 00:09:42.391 Running I/O for 1 seconds... 00:09:42.391 Running I/O for 1 seconds... 00:09:43.325 00:09:43.325 Latency(us) 00:09:43.325 [2024-12-06T11:05:54.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.325 [2024-12-06T11:05:54.472Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.325 Nvme1n1 : 1.02 6374.97 24.90 0.00 0.00 19873.67 8519.68 39321.60 00:09:43.325 [2024-12-06T11:05:54.472Z] =================================================================================================================== 00:09:43.325 [2024-12-06T11:05:54.472Z] Total : 6374.97 24.90 0.00 0.00 19873.67 8519.68 39321.60 00:09:43.325 00:09:43.325 Latency(us) 00:09:43.325 [2024-12-06T11:05:54.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.325 [2024-12-06T11:05:54.472Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:43.325 Nvme1n1 : 1.01 9029.60 35.27 0.00 0.00 14108.29 8400.52 27525.12 00:09:43.325 [2024-12-06T11:05:54.472Z] =================================================================================================================== 00:09:43.325 [2024-12-06T11:05:54.472Z] Total : 9029.60 35.27 0.00 0.00 14108.29 8400.52 27525.12 00:09:43.325 00:09:43.325 Latency(us) 00:09:43.325 [2024-12-06T11:05:54.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.325 [2024-12-06T11:05:54.472Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:43.325 Nvme1n1 : 1.00 155186.39 606.20 0.00 0.00 821.83 348.16 1980.97 00:09:43.325 [2024-12-06T11:05:54.472Z] =================================================================================================================== 00:09:43.325 [2024-12-06T11:05:54.472Z] Total : 155186.39 606.20 0.00 0.00 821.83 348.16 1980.97 00:09:43.325 00:09:43.325 Latency(us) 00:09:43.325 [2024-12-06T11:05:54.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.325 [2024-12-06T11:05:54.472Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:43.325 Nvme1n1 : 1.01 6204.04 24.23 0.00 0.00 20551.47 7000.44 41228.10 00:09:43.325 [2024-12-06T11:05:54.472Z] =================================================================================================================== 00:09:43.325 [2024-12-06T11:05:54.472Z] Total : 6204.04 24.23 0.00 0.00 20551.47 7000.44 41228.10 00:09:43.325 11:05:54 -- target/bdev_io_wait.sh@38 -- # wait 73608 00:09:43.583 11:05:54 -- target/bdev_io_wait.sh@39 -- # wait 73610 00:09:43.583 11:05:54 -- target/bdev_io_wait.sh@40 -- # wait 73615 00:09:43.583 11:05:54 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.583 11:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.583 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:09:43.583 11:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.583 11:05:54 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:43.583 11:05:54 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:43.583 11:05:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:43.583 11:05:54 -- nvmf/common.sh@116 -- # sync 00:09:43.583 11:05:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:43.583 11:05:54 -- nvmf/common.sh@119 -- # set +e 00:09:43.583 11:05:54 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:09:43.583 11:05:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:43.583 rmmod nvme_tcp 00:09:43.583 rmmod nvme_fabrics 00:09:43.583 rmmod nvme_keyring 00:09:43.583 11:05:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:43.583 11:05:54 -- nvmf/common.sh@123 -- # set -e 00:09:43.583 11:05:54 -- nvmf/common.sh@124 -- # return 0 00:09:43.583 11:05:54 -- nvmf/common.sh@477 -- # '[' -n 73583 ']' 00:09:43.583 11:05:54 -- nvmf/common.sh@478 -- # killprocess 73583 00:09:43.583 11:05:54 -- common/autotest_common.sh@936 -- # '[' -z 73583 ']' 00:09:43.583 11:05:54 -- common/autotest_common.sh@940 -- # kill -0 73583 00:09:43.583 11:05:54 -- common/autotest_common.sh@941 -- # uname 00:09:43.583 11:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.583 11:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73583 00:09:43.583 killing process with pid 73583 00:09:43.583 11:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:43.583 11:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:43.583 11:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73583' 00:09:43.583 11:05:54 -- common/autotest_common.sh@955 -- # kill 73583 00:09:43.583 11:05:54 -- common/autotest_common.sh@960 -- # wait 73583 00:09:43.841 11:05:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:43.841 11:05:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:43.841 11:05:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:43.841 11:05:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.841 11:05:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:43.841 11:05:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.841 11:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.841 11:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.841 11:05:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:43.841 00:09:43.841 real 0m3.062s 00:09:43.841 user 0m12.974s 00:09:43.841 sys 0m1.926s 00:09:43.841 11:05:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.841 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:09:43.841 ************************************ 00:09:43.841 END TEST nvmf_bdev_io_wait 00:09:43.841 ************************************ 00:09:43.841 11:05:54 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.841 11:05:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:43.841 11:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.841 11:05:54 -- common/autotest_common.sh@10 -- # set +x 00:09:43.841 ************************************ 00:09:43.841 START TEST nvmf_queue_depth 00:09:43.841 ************************************ 00:09:43.841 11:05:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.841 * Looking for test storage... 
00:09:43.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.841 11:05:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:43.841 11:05:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:43.841 11:05:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:44.099 11:05:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:44.099 11:05:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:44.099 11:05:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:44.099 11:05:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:44.099 11:05:55 -- scripts/common.sh@335 -- # IFS=.-: 00:09:44.099 11:05:55 -- scripts/common.sh@335 -- # read -ra ver1 00:09:44.099 11:05:55 -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.099 11:05:55 -- scripts/common.sh@336 -- # read -ra ver2 00:09:44.099 11:05:55 -- scripts/common.sh@337 -- # local 'op=<' 00:09:44.099 11:05:55 -- scripts/common.sh@339 -- # ver1_l=2 00:09:44.099 11:05:55 -- scripts/common.sh@340 -- # ver2_l=1 00:09:44.099 11:05:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:44.099 11:05:55 -- scripts/common.sh@343 -- # case "$op" in 00:09:44.099 11:05:55 -- scripts/common.sh@344 -- # : 1 00:09:44.099 11:05:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:44.099 11:05:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.099 11:05:55 -- scripts/common.sh@364 -- # decimal 1 00:09:44.099 11:05:55 -- scripts/common.sh@352 -- # local d=1 00:09:44.099 11:05:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.099 11:05:55 -- scripts/common.sh@354 -- # echo 1 00:09:44.099 11:05:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:44.099 11:05:55 -- scripts/common.sh@365 -- # decimal 2 00:09:44.099 11:05:55 -- scripts/common.sh@352 -- # local d=2 00:09:44.100 11:05:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.100 11:05:55 -- scripts/common.sh@354 -- # echo 2 00:09:44.100 11:05:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:44.100 11:05:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:44.100 11:05:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:44.100 11:05:55 -- scripts/common.sh@367 -- # return 0 00:09:44.100 11:05:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.100 11:05:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:44.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.100 --rc genhtml_branch_coverage=1 00:09:44.100 --rc genhtml_function_coverage=1 00:09:44.100 --rc genhtml_legend=1 00:09:44.100 --rc geninfo_all_blocks=1 00:09:44.100 --rc geninfo_unexecuted_blocks=1 00:09:44.100 00:09:44.100 ' 00:09:44.100 11:05:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:44.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.100 --rc genhtml_branch_coverage=1 00:09:44.100 --rc genhtml_function_coverage=1 00:09:44.100 --rc genhtml_legend=1 00:09:44.100 --rc geninfo_all_blocks=1 00:09:44.100 --rc geninfo_unexecuted_blocks=1 00:09:44.100 00:09:44.100 ' 00:09:44.100 11:05:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:44.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.100 --rc genhtml_branch_coverage=1 00:09:44.100 --rc genhtml_function_coverage=1 00:09:44.100 --rc genhtml_legend=1 00:09:44.100 --rc geninfo_all_blocks=1 00:09:44.100 --rc geninfo_unexecuted_blocks=1 00:09:44.100 00:09:44.100 ' 00:09:44.100 
11:05:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:44.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.100 --rc genhtml_branch_coverage=1 00:09:44.100 --rc genhtml_function_coverage=1 00:09:44.100 --rc genhtml_legend=1 00:09:44.100 --rc geninfo_all_blocks=1 00:09:44.100 --rc geninfo_unexecuted_blocks=1 00:09:44.100 00:09:44.100 ' 00:09:44.100 11:05:55 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.100 11:05:55 -- nvmf/common.sh@7 -- # uname -s 00:09:44.100 11:05:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.100 11:05:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.100 11:05:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.100 11:05:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.100 11:05:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.100 11:05:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.100 11:05:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.100 11:05:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.100 11:05:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.100 11:05:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:09:44.100 11:05:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:09:44.100 11:05:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.100 11:05:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.100 11:05:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.100 11:05:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.100 11:05:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.100 11:05:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.100 11:05:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.100 11:05:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.100 11:05:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.100 11:05:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.100 11:05:55 -- paths/export.sh@5 -- # export PATH 00:09:44.100 11:05:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.100 11:05:55 -- nvmf/common.sh@46 -- # : 0 00:09:44.100 11:05:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:44.100 11:05:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:44.100 11:05:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:44.100 11:05:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.100 11:05:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.100 11:05:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:44.100 11:05:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:44.100 11:05:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:44.100 11:05:55 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:44.100 11:05:55 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:44.100 11:05:55 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:44.100 11:05:55 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:44.100 11:05:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:44.100 11:05:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.100 11:05:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:44.100 11:05:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:44.100 11:05:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:44.100 11:05:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.100 11:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.100 11:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.100 11:05:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:44.100 11:05:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:44.100 11:05:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.100 11:05:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.100 11:05:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:44.100 11:05:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:44.100 11:05:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.100 11:05:55 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.100 11:05:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.100 11:05:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.100 11:05:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.100 11:05:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.100 11:05:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.100 11:05:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.100 11:05:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:44.100 11:05:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:44.100 Cannot find device "nvmf_tgt_br" 00:09:44.100 11:05:55 -- nvmf/common.sh@154 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.100 Cannot find device "nvmf_tgt_br2" 00:09:44.100 11:05:55 -- nvmf/common.sh@155 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:44.100 11:05:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:44.100 Cannot find device "nvmf_tgt_br" 00:09:44.100 11:05:55 -- nvmf/common.sh@157 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:44.100 Cannot find device "nvmf_tgt_br2" 00:09:44.100 11:05:55 -- nvmf/common.sh@158 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:44.100 11:05:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:44.100 11:05:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.100 11:05:55 -- nvmf/common.sh@161 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.100 11:05:55 -- nvmf/common.sh@162 -- # true 00:09:44.100 11:05:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.100 11:05:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.100 11:05:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.100 11:05:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.358 11:05:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.358 11:05:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.358 11:05:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.358 11:05:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:44.358 11:05:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:44.358 11:05:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:44.358 11:05:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:44.358 11:05:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:44.358 11:05:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:44.358 11:05:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.358 11:05:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:44.358 11:05:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.358 11:05:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:44.358 11:05:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:44.358 11:05:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.358 11:05:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.358 11:05:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.358 11:05:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.359 11:05:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.359 11:05:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:44.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:44.359 00:09:44.359 --- 10.0.0.2 ping statistics --- 00:09:44.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.359 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:44.359 11:05:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:44.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:44.359 00:09:44.359 --- 10.0.0.3 ping statistics --- 00:09:44.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.359 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:44.359 11:05:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:44.359 00:09:44.359 --- 10.0.0.1 ping statistics --- 00:09:44.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.359 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:44.359 11:05:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.359 11:05:55 -- nvmf/common.sh@421 -- # return 0 00:09:44.359 11:05:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:44.359 11:05:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.359 11:05:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:44.359 11:05:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:44.359 11:05:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.359 11:05:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:44.359 11:05:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:44.359 11:05:55 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:44.359 11:05:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:44.359 11:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:44.359 11:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:44.359 11:05:55 -- nvmf/common.sh@469 -- # nvmfpid=73824 00:09:44.359 11:05:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.359 11:05:55 -- nvmf/common.sh@470 -- # waitforlisten 73824 00:09:44.359 11:05:55 -- common/autotest_common.sh@829 -- # '[' -z 73824 ']' 00:09:44.359 11:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.359 11:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.359 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:44.359 11:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.359 11:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.359 11:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:44.359 [2024-12-06 11:05:55.478290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:44.359 [2024-12-06 11:05:55.478391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.616 [2024-12-06 11:05:55.618694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.616 [2024-12-06 11:05:55.659692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.616 [2024-12-06 11:05:55.659877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.616 [2024-12-06 11:05:55.659892] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.616 [2024-12-06 11:05:55.659903] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.616 [2024-12-06 11:05:55.659942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.549 11:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.549 11:05:56 -- common/autotest_common.sh@862 -- # return 0 00:09:45.549 11:05:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:45.549 11:05:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 11:05:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.549 11:05:56 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.549 11:05:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [2024-12-06 11:05:56.501079] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.549 11:05:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 11:05:56 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.549 11:05:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 Malloc0 00:09:45.549 11:05:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 11:05:56 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.549 11:05:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 11:05:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 11:05:56 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.549 11:05:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 11:05:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 11:05:56 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
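The rpc_cmd calls traced in target/queue_depth.sh@23-27 above are what actually build the target side of this test. A minimal sketch of that same sequence, assuming nvmf_tgt is already running (as started via nvmf/common.sh@468 above) and that rpc.py talks to the default /var/tmp/spdk.sock; the transport options are copied verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the options the script passes (-o -u 8192, as traced)
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512)
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  # subsystem allowing any host (-a), with the serial used by the test
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # export the namespace over NVMe/TCP on the target-side veth address
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420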
00:09:45.549 11:05:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [2024-12-06 11:05:56.560283] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.549 11:05:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 11:05:56 -- target/queue_depth.sh@30 -- # bdevperf_pid=73857 00:09:45.549 11:05:56 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:45.549 11:05:56 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:45.549 11:05:56 -- target/queue_depth.sh@33 -- # waitforlisten 73857 /var/tmp/bdevperf.sock 00:09:45.549 11:05:56 -- common/autotest_common.sh@829 -- # '[' -z 73857 ']' 00:09:45.549 11:05:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:45.549 11:05:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.549 11:05:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.549 11:05:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.549 11:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [2024-12-06 11:05:56.617326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:45.549 [2024-12-06 11:05:56.617421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73857 ] 00:09:45.807 [2024-12-06 11:05:56.759798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.807 [2024-12-06 11:05:56.800095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.741 11:05:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.741 11:05:57 -- common/autotest_common.sh@862 -- # return 0 00:09:46.741 11:05:57 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:46.741 11:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.741 11:05:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.741 NVMe0n1 00:09:46.741 11:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.741 11:05:57 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:46.741 Running I/O for 10 seconds... 
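The host side of the queue-depth test, as traced in target/queue_depth.sh@29-35, runs bdevperf in idle mode, attaches the remote subsystem as an NVMe bdev over TCP, and then triggers the workload. A condensed sketch of those three steps with paths and arguments copied from the trace; the harness additionally polls /var/tmp/bdevperf.sock (waitforlisten) before step 2, which this sketch simplifies:

  spdk=/home/vagrant/spdk_repo/spdk
  # 1. bdevperf started idle (-z) on its own RPC socket: queue depth 1024, 4 KiB I/O, verify workload, 10 s
  "$spdk"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # 2. attach the target subsystem over NVMe/TCP as bdev "NVMe0" (its namespace appears as NVMe0n1)
  "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # 3. tell the idle bdevperf instance to run the configured workload
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  wait "$bdevperf_pid"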
00:09:56.725 00:09:56.725 Latency(us) 00:09:56.725 [2024-12-06T11:06:07.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.726 [2024-12-06T11:06:07.873Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:56.726 Verification LBA range: start 0x0 length 0x4000 00:09:56.726 NVMe0n1 : 10.06 15482.94 60.48 0.00 0.00 65897.93 13941.29 58148.31 00:09:56.726 [2024-12-06T11:06:07.873Z] =================================================================================================================== 00:09:56.726 [2024-12-06T11:06:07.873Z] Total : 15482.94 60.48 0.00 0.00 65897.93 13941.29 58148.31 00:09:56.726 0 00:09:56.726 11:06:07 -- target/queue_depth.sh@39 -- # killprocess 73857 00:09:56.726 11:06:07 -- common/autotest_common.sh@936 -- # '[' -z 73857 ']' 00:09:56.726 11:06:07 -- common/autotest_common.sh@940 -- # kill -0 73857 00:09:56.726 11:06:07 -- common/autotest_common.sh@941 -- # uname 00:09:56.726 11:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:56.726 11:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73857 00:09:56.986 killing process with pid 73857 00:09:56.986 Received shutdown signal, test time was about 10.000000 seconds 00:09:56.986 00:09:56.986 Latency(us) 00:09:56.986 [2024-12-06T11:06:08.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.986 [2024-12-06T11:06:08.133Z] =================================================================================================================== 00:09:56.986 [2024-12-06T11:06:08.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:56.986 11:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:56.986 11:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:56.986 11:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73857' 00:09:56.986 11:06:07 -- common/autotest_common.sh@955 -- # kill 73857 00:09:56.986 11:06:07 -- common/autotest_common.sh@960 -- # wait 73857 00:09:56.986 11:06:08 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:56.986 11:06:08 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:56.986 11:06:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:56.986 11:06:08 -- nvmf/common.sh@116 -- # sync 00:09:56.986 11:06:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:56.986 11:06:08 -- nvmf/common.sh@119 -- # set +e 00:09:56.986 11:06:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:56.986 11:06:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:56.986 rmmod nvme_tcp 00:09:56.986 rmmod nvme_fabrics 00:09:56.986 rmmod nvme_keyring 00:09:57.245 11:06:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:57.246 11:06:08 -- nvmf/common.sh@123 -- # set -e 00:09:57.246 11:06:08 -- nvmf/common.sh@124 -- # return 0 00:09:57.246 11:06:08 -- nvmf/common.sh@477 -- # '[' -n 73824 ']' 00:09:57.246 11:06:08 -- nvmf/common.sh@478 -- # killprocess 73824 00:09:57.246 11:06:08 -- common/autotest_common.sh@936 -- # '[' -z 73824 ']' 00:09:57.246 11:06:08 -- common/autotest_common.sh@940 -- # kill -0 73824 00:09:57.246 11:06:08 -- common/autotest_common.sh@941 -- # uname 00:09:57.246 11:06:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:57.246 11:06:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73824 00:09:57.246 killing process with pid 73824 00:09:57.246 11:06:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:57.246 11:06:08 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:57.246 11:06:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73824' 00:09:57.246 11:06:08 -- common/autotest_common.sh@955 -- # kill 73824 00:09:57.246 11:06:08 -- common/autotest_common.sh@960 -- # wait 73824 00:09:57.246 11:06:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:57.246 11:06:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:57.246 11:06:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:57.246 11:06:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.246 11:06:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:57.246 11:06:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.246 11:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.246 11:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.246 11:06:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:57.246 ************************************ 00:09:57.246 END TEST nvmf_queue_depth 00:09:57.246 ************************************ 00:09:57.246 00:09:57.246 real 0m13.452s 00:09:57.246 user 0m23.536s 00:09:57.246 sys 0m1.934s 00:09:57.246 11:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.246 11:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.505 11:06:08 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.505 11:06:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.505 11:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.505 ************************************ 00:09:57.505 START TEST nvmf_multipath 00:09:57.505 ************************************ 00:09:57.505 11:06:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.505 * Looking for test storage... 00:09:57.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.505 11:06:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:57.505 11:06:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:57.505 11:06:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:57.505 11:06:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:57.505 11:06:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:57.505 11:06:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:57.505 11:06:08 -- scripts/common.sh@335 -- # IFS=.-: 00:09:57.505 11:06:08 -- scripts/common.sh@335 -- # read -ra ver1 00:09:57.505 11:06:08 -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.505 11:06:08 -- scripts/common.sh@336 -- # read -ra ver2 00:09:57.505 11:06:08 -- scripts/common.sh@337 -- # local 'op=<' 00:09:57.505 11:06:08 -- scripts/common.sh@339 -- # ver1_l=2 00:09:57.505 11:06:08 -- scripts/common.sh@340 -- # ver2_l=1 00:09:57.505 11:06:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:57.505 11:06:08 -- scripts/common.sh@343 -- # case "$op" in 00:09:57.505 11:06:08 -- scripts/common.sh@344 -- # : 1 00:09:57.505 11:06:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:57.505 11:06:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.505 11:06:08 -- scripts/common.sh@364 -- # decimal 1 00:09:57.505 11:06:08 -- scripts/common.sh@352 -- # local d=1 00:09:57.505 11:06:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.505 11:06:08 -- scripts/common.sh@354 -- # echo 1 00:09:57.505 11:06:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:57.505 11:06:08 -- scripts/common.sh@365 -- # decimal 2 00:09:57.505 11:06:08 -- scripts/common.sh@352 -- # local d=2 00:09:57.505 11:06:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.505 11:06:08 -- scripts/common.sh@354 -- # echo 2 00:09:57.505 11:06:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:57.505 11:06:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:57.505 11:06:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:57.505 11:06:08 -- scripts/common.sh@367 -- # return 0 00:09:57.505 11:06:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:57.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.505 --rc genhtml_branch_coverage=1 00:09:57.505 --rc genhtml_function_coverage=1 00:09:57.505 --rc genhtml_legend=1 00:09:57.505 --rc geninfo_all_blocks=1 00:09:57.505 --rc geninfo_unexecuted_blocks=1 00:09:57.505 00:09:57.505 ' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:57.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.505 --rc genhtml_branch_coverage=1 00:09:57.505 --rc genhtml_function_coverage=1 00:09:57.505 --rc genhtml_legend=1 00:09:57.505 --rc geninfo_all_blocks=1 00:09:57.505 --rc geninfo_unexecuted_blocks=1 00:09:57.505 00:09:57.505 ' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:57.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.505 --rc genhtml_branch_coverage=1 00:09:57.505 --rc genhtml_function_coverage=1 00:09:57.505 --rc genhtml_legend=1 00:09:57.505 --rc geninfo_all_blocks=1 00:09:57.505 --rc geninfo_unexecuted_blocks=1 00:09:57.505 00:09:57.505 ' 00:09:57.505 11:06:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:57.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.505 --rc genhtml_branch_coverage=1 00:09:57.505 --rc genhtml_function_coverage=1 00:09:57.505 --rc genhtml_legend=1 00:09:57.505 --rc geninfo_all_blocks=1 00:09:57.505 --rc geninfo_unexecuted_blocks=1 00:09:57.505 00:09:57.505 ' 00:09:57.505 11:06:08 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.505 11:06:08 -- nvmf/common.sh@7 -- # uname -s 00:09:57.764 11:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.764 11:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.764 11:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.764 11:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.764 11:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.764 11:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.765 11:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.765 11:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.765 11:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.765 11:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.765 11:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:09:57.765 
11:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:09:57.765 11:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.765 11:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.765 11:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.765 11:06:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.765 11:06:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.765 11:06:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.765 11:06:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.765 11:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.765 11:06:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.765 11:06:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.765 11:06:08 -- paths/export.sh@5 -- # export PATH 00:09:57.765 11:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.765 11:06:08 -- nvmf/common.sh@46 -- # : 0 00:09:57.765 11:06:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:57.765 11:06:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:57.765 11:06:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:57.765 11:06:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.765 11:06:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.765 11:06:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
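nvmftestinit, traced immediately below (and once already for the queue_depth run above), builds a purely virtual NVMe/TCP test network out of veth pairs, a network namespace and a bridge. A condensed sketch of what nvmf_veth_init sets up, with the interface names and addresses taken from the trace; the cleanup of any leftover devices and the error handling are omitted:

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one initiator-side pair, two target-side pairs
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target ends move into the namespace and get 10.0.0.2 / 10.0.0.3; the initiator keeps 10.0.0.1
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and enslave the three host-side peers to one bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # accept NVMe/TCP (port 4420) on the initiator interface and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings in both directions, as in the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1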
00:09:57.765 11:06:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:57.765 11:06:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:57.765 11:06:08 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.765 11:06:08 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.765 11:06:08 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:57.765 11:06:08 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.765 11:06:08 -- target/multipath.sh@43 -- # nvmftestinit 00:09:57.765 11:06:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:57.765 11:06:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.765 11:06:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:57.765 11:06:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:57.765 11:06:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:57.765 11:06:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.765 11:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.765 11:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.765 11:06:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:57.765 11:06:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:57.765 11:06:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:57.765 11:06:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:57.765 11:06:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:57.765 11:06:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:57.765 11:06:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.765 11:06:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.765 11:06:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.765 11:06:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:57.765 11:06:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.765 11:06:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.765 11:06:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.765 11:06:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.765 11:06:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.765 11:06:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.765 11:06:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.765 11:06:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.765 11:06:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:57.765 11:06:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:57.765 Cannot find device "nvmf_tgt_br" 00:09:57.765 11:06:08 -- nvmf/common.sh@154 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.765 Cannot find device "nvmf_tgt_br2" 00:09:57.765 11:06:08 -- nvmf/common.sh@155 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:57.765 11:06:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:57.765 Cannot find device "nvmf_tgt_br" 00:09:57.765 11:06:08 -- nvmf/common.sh@157 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:57.765 Cannot find device "nvmf_tgt_br2" 00:09:57.765 11:06:08 -- nvmf/common.sh@158 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:57.765 11:06:08 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:57.765 11:06:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.765 11:06:08 -- nvmf/common.sh@161 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.765 11:06:08 -- nvmf/common.sh@162 -- # true 00:09:57.765 11:06:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.765 11:06:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.765 11:06:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.765 11:06:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.765 11:06:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.765 11:06:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.765 11:06:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.765 11:06:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.765 11:06:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.765 11:06:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:57.765 11:06:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:57.765 11:06:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:57.765 11:06:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:57.765 11:06:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.765 11:06:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.765 11:06:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.765 11:06:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:57.765 11:06:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:57.765 11:06:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.024 11:06:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.024 11:06:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.024 11:06:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.024 11:06:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.024 11:06:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:58.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:58.024 00:09:58.024 --- 10.0.0.2 ping statistics --- 00:09:58.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.024 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:58.024 11:06:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:58.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:58.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:58.024 00:09:58.024 --- 10.0.0.3 ping statistics --- 00:09:58.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.024 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:58.024 11:06:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:58.024 00:09:58.024 --- 10.0.0.1 ping statistics --- 00:09:58.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.024 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:58.024 11:06:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.024 11:06:08 -- nvmf/common.sh@421 -- # return 0 00:09:58.024 11:06:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:58.024 11:06:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.024 11:06:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:58.024 11:06:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:58.024 11:06:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.024 11:06:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:58.024 11:06:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:58.024 11:06:08 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:58.024 11:06:08 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:58.024 11:06:08 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:58.024 11:06:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:58.024 11:06:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.024 11:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:58.024 11:06:08 -- nvmf/common.sh@469 -- # nvmfpid=74186 00:09:58.024 11:06:09 -- nvmf/common.sh@470 -- # waitforlisten 74186 00:09:58.024 11:06:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.024 11:06:09 -- common/autotest_common.sh@829 -- # '[' -z 74186 ']' 00:09:58.024 11:06:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.024 11:06:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.024 11:06:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.024 11:06:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.024 11:06:09 -- common/autotest_common.sh@10 -- # set +x 00:09:58.024 [2024-12-06 11:06:09.048116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:58.024 [2024-12-06 11:06:09.048198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.284 [2024-12-06 11:06:09.182044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.284 [2024-12-06 11:06:09.214903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:58.284 [2024-12-06 11:06:09.215261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:58.284 [2024-12-06 11:06:09.215356] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.284 [2024-12-06 11:06:09.215459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.284 [2024-12-06 11:06:09.215653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.284 [2024-12-06 11:06:09.215741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.284 [2024-12-06 11:06:09.216280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.284 [2024-12-06 11:06:09.216289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.284 11:06:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.284 11:06:09 -- common/autotest_common.sh@862 -- # return 0 00:09:58.284 11:06:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:58.284 11:06:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.284 11:06:09 -- common/autotest_common.sh@10 -- # set +x 00:09:58.284 11:06:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.284 11:06:09 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.544 [2024-12-06 11:06:09.630211] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.544 11:06:09 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:58.802 Malloc0 00:09:59.060 11:06:09 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:59.060 11:06:10 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.318 11:06:10 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.576 [2024-12-06 11:06:10.658989] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.576 11:06:10 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.835 [2024-12-06 11:06:10.883274] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:59.835 11:06:10 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:00.094 11:06:11 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:00.094 11:06:11 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.094 11:06:11 -- common/autotest_common.sh@1187 -- # local i=0 00:10:00.094 11:06:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.094 11:06:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:00.094 11:06:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:02.626 11:06:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
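Because the subsystem now has two listeners (10.0.0.2 and 10.0.0.3), target/multipath.sh@67-68 connects once per address, so the host ends up with a single namespace (nvme0n1) reachable through two controllers (nvme0c0n1 and nvme0c1n1), whose per-path ANA states the script then watches under /sys/block. A small sketch of that host-side pattern, with the NQN and host identity values copied from the trace and the connect flags passed through unchanged:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee
  hostid=6bf11412-71a7-484f-85c4-221cb93c26ee
  # one connect per listener address; both land under the same nvme subsystem on the host
  nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # per-path ANA state as seen by the host (optimized / non-optimized / inaccessible)
  cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state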
00:10:02.626 11:06:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:02.626 11:06:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.626 11:06:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:02.626 11:06:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.626 11:06:13 -- common/autotest_common.sh@1197 -- # return 0 00:10:02.626 11:06:13 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:02.626 11:06:13 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:02.626 11:06:13 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:02.626 11:06:13 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:02.626 11:06:13 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:02.626 11:06:13 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:02.626 11:06:13 -- target/multipath.sh@38 -- # return 0 00:10:02.626 11:06:13 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:02.626 11:06:13 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:02.626 11:06:13 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:02.626 11:06:13 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:02.626 11:06:13 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:02.626 11:06:13 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:02.626 11:06:13 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:02.626 11:06:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:02.626 11:06:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:02.626 11:06:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.626 11:06:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.626 11:06:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.626 11:06:13 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:02.626 11:06:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:02.626 11:06:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:02.626 11:06:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.626 11:06:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.626 11:06:13 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.626 11:06:13 -- target/multipath.sh@85 -- # echo numa 00:10:02.626 11:06:13 -- target/multipath.sh@88 -- # fio_pid=74268 00:10:02.626 11:06:13 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:02.626 11:06:13 -- target/multipath.sh@90 -- # sleep 1 00:10:02.626 [global] 00:10:02.626 thread=1 00:10:02.626 invalidate=1 00:10:02.626 rw=randrw 00:10:02.626 time_based=1 00:10:02.626 runtime=6 00:10:02.626 ioengine=libaio 00:10:02.626 direct=1 00:10:02.626 bs=4096 00:10:02.626 iodepth=128 00:10:02.626 norandommap=0 00:10:02.626 numjobs=1 00:10:02.626 00:10:02.626 verify_dump=1 00:10:02.626 verify_backlog=512 00:10:02.626 verify_state_save=0 00:10:02.626 do_verify=1 00:10:02.626 verify=crc32c-intel 00:10:02.626 [job0] 00:10:02.626 filename=/dev/nvme0n1 00:10:02.626 Could not set queue depth (nvme0n1) 00:10:02.626 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.626 fio-3.35 00:10:02.626 Starting 1 thread 00:10:03.194 11:06:14 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:03.452 11:06:14 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:03.711 11:06:14 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:03.711 11:06:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:03.711 11:06:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:03.711 11:06:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.711 11:06:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.711 11:06:14 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:03.711 11:06:14 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:03.711 11:06:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:03.711 11:06:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:03.711 11:06:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.711 11:06:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.711 11:06:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:03.711 11:06:14 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:03.969 11:06:15 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:04.226 11:06:15 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:04.226 11:06:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:04.226 11:06:15 -- target/multipath.sh@22 -- # local timeout=20 00:10:04.226 11:06:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.226 11:06:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.226 11:06:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.226 11:06:15 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:04.226 11:06:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:04.226 11:06:15 -- target/multipath.sh@22 -- # local timeout=20 00:10:04.226 11:06:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.226 11:06:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.226 11:06:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.226 11:06:15 -- target/multipath.sh@104 -- # wait 74268 00:10:08.411 00:10:08.411 job0: (groupid=0, jobs=1): err= 0: pid=74295: Fri Dec 6 11:06:19 2024 00:10:08.411 read: IOPS=10.9k, BW=42.5MiB/s (44.6MB/s)(255MiB/6007msec) 00:10:08.411 slat (usec): min=4, max=8262, avg=53.51, stdev=228.36 00:10:08.411 clat (usec): min=1478, max=15796, avg=7939.23, stdev=1455.99 00:10:08.411 lat (usec): min=1486, max=15828, avg=7992.74, stdev=1461.60 00:10:08.411 clat percentiles (usec): 00:10:08.411 | 1.00th=[ 4047], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7046], 00:10:08.411 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:10:08.411 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[11076], 00:10:08.411 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13435], 99.95th=[13698], 00:10:08.411 | 99.99th=[13960] 00:10:08.411 bw ( KiB/s): min= 8168, max=27816, per=52.86%, avg=23000.73, stdev=5482.61, samples=11 00:10:08.411 iops : min= 2042, max= 6954, avg=5750.18, stdev=1370.65, samples=11 00:10:08.411 write: IOPS=6385, BW=24.9MiB/s (26.2MB/s)(137MiB/5494msec); 0 zone resets 00:10:08.411 slat (usec): min=14, max=2703, avg=62.16, stdev=155.19 00:10:08.411 clat (usec): min=2155, max=13539, avg=6933.82, stdev=1248.35 00:10:08.411 lat (usec): min=2186, max=13561, avg=6995.98, stdev=1252.21 00:10:08.411 clat percentiles (usec): 00:10:08.411 | 1.00th=[ 3163], 5.00th=[ 4047], 10.00th=[ 5473], 20.00th=[ 6390], 00:10:08.411 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:10:08.411 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8356], 00:10:08.411 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12125], 99.95th=[12518], 00:10:08.411 | 99.99th=[13304] 00:10:08.411 bw ( KiB/s): min= 8680, max=27336, per=90.10%, avg=23014.55, stdev=5285.91, samples=11 00:10:08.412 iops : min= 2170, max= 6834, avg=5753.64, stdev=1321.48, samples=11 00:10:08.412 lat (msec) : 2=0.01%, 4=2.23%, 10=91.78%, 20=5.99% 00:10:08.412 cpu : usr=5.76%, sys=21.01%, ctx=5789, majf=0, minf=90 00:10:08.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:08.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.412 issued rwts: total=65341,35084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.412 00:10:08.412 Run status group 0 (all jobs): 00:10:08.412 READ: bw=42.5MiB/s (44.6MB/s), 42.5MiB/s-42.5MiB/s (44.6MB/s-44.6MB/s), io=255MiB (268MB), run=6007-6007msec 00:10:08.412 WRITE: bw=24.9MiB/s (26.2MB/s), 24.9MiB/s-24.9MiB/s (26.2MB/s-26.2MB/s), io=137MiB (144MB), run=5494-5494msec 00:10:08.412 00:10:08.412 Disk stats (read/write): 00:10:08.412 nvme0n1: ios=64390/34398, merge=0/0, 
ticks=489500/224009, in_queue=713509, util=98.53% 00:10:08.412 11:06:19 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:08.670 11:06:19 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:09.236 11:06:20 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:09.236 11:06:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:09.236 11:06:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:09.236 11:06:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:09.236 11:06:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:09.236 11:06:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:09.236 11:06:20 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:09.236 11:06:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:09.236 11:06:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:09.236 11:06:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:09.236 11:06:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:09.236 11:06:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:09.236 11:06:20 -- target/multipath.sh@113 -- # echo round-robin 00:10:09.236 11:06:20 -- target/multipath.sh@116 -- # fio_pid=74371 00:10:09.236 11:06:20 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:09.236 11:06:20 -- target/multipath.sh@118 -- # sleep 1 00:10:09.236 [global] 00:10:09.236 thread=1 00:10:09.236 invalidate=1 00:10:09.236 rw=randrw 00:10:09.236 time_based=1 00:10:09.236 runtime=6 00:10:09.236 ioengine=libaio 00:10:09.236 direct=1 00:10:09.236 bs=4096 00:10:09.236 iodepth=128 00:10:09.236 norandommap=0 00:10:09.236 numjobs=1 00:10:09.236 00:10:09.236 verify_dump=1 00:10:09.236 verify_backlog=512 00:10:09.236 verify_state_save=0 00:10:09.236 do_verify=1 00:10:09.236 verify=crc32c-intel 00:10:09.236 [job0] 00:10:09.236 filename=/dev/nvme0n1 00:10:09.236 Could not set queue depth (nvme0n1) 00:10:09.236 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.236 fio-3.35 00:10:09.237 Starting 1 thread 00:10:10.203 11:06:21 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:10.459 11:06:21 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:10.717 11:06:21 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:10.717 11:06:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:10.717 11:06:21 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.717 11:06:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.717 11:06:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.717 11:06:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:10.717 11:06:21 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:10.717 11:06:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:10.717 11:06:21 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.717 11:06:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.717 11:06:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.717 11:06:21 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.717 11:06:21 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:10.975 11:06:21 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:10.975 11:06:22 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:10.975 11:06:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:10.975 11:06:22 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.975 11:06:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.975 11:06:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.975 11:06:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.975 11:06:22 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:10.975 11:06:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:10.975 11:06:22 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.975 11:06:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.975 11:06:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.975 11:06:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:10.975 11:06:22 -- target/multipath.sh@132 -- # wait 74371 00:10:15.262 00:10:15.262 job0: (groupid=0, jobs=1): err= 0: pid=74392: Fri Dec 6 11:06:26 2024 00:10:15.262 read: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(291MiB/6002msec) 00:10:15.262 slat (usec): min=4, max=6041, avg=40.31, stdev=187.58 00:10:15.262 clat (usec): min=351, max=16693, avg=7117.25, stdev=1802.28 00:10:15.262 lat (usec): min=368, max=16702, avg=7157.56, stdev=1815.37 00:10:15.262 clat percentiles (usec): 00:10:15.262 | 1.00th=[ 2999], 5.00th=[ 3916], 10.00th=[ 4621], 20.00th=[ 5735], 00:10:15.262 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:15.262 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8848], 95.00th=[10290], 00:10:15.262 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13829], 99.95th=[14484], 00:10:15.262 | 99.99th=[16450] 00:10:15.262 bw ( KiB/s): min= 8376, max=41144, per=52.09%, avg=25845.09, stdev=9502.37, samples=11 00:10:15.262 iops : min= 2094, max=10286, avg=6461.27, stdev=2375.59, samples=11 00:10:15.262 write: IOPS=7206, BW=28.1MiB/s (29.5MB/s)(148MiB/5256msec); 0 zone resets 00:10:15.262 slat (usec): min=14, max=2585, avg=52.52, stdev=132.44 00:10:15.262 clat (usec): min=237, max=16474, avg=6128.62, stdev=1707.42 00:10:15.262 lat (usec): min=347, max=16495, avg=6181.14, stdev=1721.32 00:10:15.262 clat percentiles (usec): 00:10:15.262 | 1.00th=[ 2540], 5.00th=[ 3130], 10.00th=[ 3556], 20.00th=[ 4293], 00:10:15.262 | 30.00th=[ 5211], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 6915], 00:10:15.262 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8225], 00:10:15.262 | 99.00th=[10290], 99.50th=[11076], 99.90th=[13829], 99.95th=[14746], 00:10:15.262 | 99.99th=[16057] 00:10:15.262 bw ( KiB/s): min= 8496, max=40416, per=89.64%, avg=25840.00, stdev=9314.42, samples=11 00:10:15.263 iops : min= 2124, max=10104, avg=6460.00, stdev=2328.61, samples=11 00:10:15.263 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.02% 00:10:15.263 lat (msec) : 2=0.19%, 4=8.67%, 10=86.96%, 20=4.11% 00:10:15.263 cpu : usr=6.32%, sys=22.45%, ctx=6240, majf=0, minf=114 00:10:15.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:15.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.263 issued rwts: total=74453,37875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.263 00:10:15.263 Run status group 0 (all jobs): 00:10:15.263 READ: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6002-6002msec 00:10:15.263 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=148MiB (155MB), run=5256-5256msec 00:10:15.263 00:10:15.263 Disk stats (read/write): 00:10:15.263 nvme0n1: ios=72897/37875, merge=0/0, ticks=493402/216110, in_queue=709512, util=98.58% 00:10:15.263 11:06:26 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.521 11:06:26 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.521 11:06:26 -- common/autotest_common.sh@1208 -- # local i=0 00:10:15.521 11:06:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:15.521 11:06:26 
-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.521 11:06:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:15.521 11:06:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.521 11:06:26 -- common/autotest_common.sh@1220 -- # return 0 00:10:15.521 11:06:26 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.781 11:06:26 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:15.781 11:06:26 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:15.781 11:06:26 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:15.781 11:06:26 -- target/multipath.sh@144 -- # nvmftestfini 00:10:15.781 11:06:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:15.781 11:06:26 -- nvmf/common.sh@116 -- # sync 00:10:15.781 11:06:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:15.781 11:06:26 -- nvmf/common.sh@119 -- # set +e 00:10:15.781 11:06:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:15.781 11:06:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:15.781 rmmod nvme_tcp 00:10:15.781 rmmod nvme_fabrics 00:10:15.781 rmmod nvme_keyring 00:10:15.781 11:06:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:15.781 11:06:26 -- nvmf/common.sh@123 -- # set -e 00:10:15.781 11:06:26 -- nvmf/common.sh@124 -- # return 0 00:10:15.781 11:06:26 -- nvmf/common.sh@477 -- # '[' -n 74186 ']' 00:10:15.781 11:06:26 -- nvmf/common.sh@478 -- # killprocess 74186 00:10:15.781 11:06:26 -- common/autotest_common.sh@936 -- # '[' -z 74186 ']' 00:10:15.781 11:06:26 -- common/autotest_common.sh@940 -- # kill -0 74186 00:10:15.781 11:06:26 -- common/autotest_common.sh@941 -- # uname 00:10:15.781 11:06:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.781 11:06:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74186 00:10:15.781 11:06:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:15.781 11:06:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:15.781 killing process with pid 74186 00:10:15.781 11:06:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74186' 00:10:15.781 11:06:26 -- common/autotest_common.sh@955 -- # kill 74186 00:10:15.781 11:06:26 -- common/autotest_common.sh@960 -- # wait 74186 00:10:16.040 11:06:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:16.040 11:06:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:16.040 11:06:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:16.040 11:06:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.040 11:06:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:16.040 11:06:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.040 11:06:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.040 11:06:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.040 11:06:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:16.040 00:10:16.040 real 0m18.668s 00:10:16.040 user 1m9.632s 00:10:16.040 sys 0m9.749s 00:10:16.040 11:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.040 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:10:16.040 ************************************ 00:10:16.040 END TEST nvmf_multipath 00:10:16.040 ************************************ 00:10:16.040 11:06:27 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.040 11:06:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:16.040 11:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.041 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:10:16.041 ************************************ 00:10:16.041 START TEST nvmf_zcopy 00:10:16.041 ************************************ 00:10:16.041 11:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.300 * Looking for test storage... 00:10:16.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.300 11:06:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:16.300 11:06:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:16.300 11:06:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:16.300 11:06:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:16.300 11:06:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:16.300 11:06:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:16.300 11:06:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:16.300 11:06:27 -- scripts/common.sh@335 -- # IFS=.-: 00:10:16.300 11:06:27 -- scripts/common.sh@335 -- # read -ra ver1 00:10:16.300 11:06:27 -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.300 11:06:27 -- scripts/common.sh@336 -- # read -ra ver2 00:10:16.300 11:06:27 -- scripts/common.sh@337 -- # local 'op=<' 00:10:16.300 11:06:27 -- scripts/common.sh@339 -- # ver1_l=2 00:10:16.300 11:06:27 -- scripts/common.sh@340 -- # ver2_l=1 00:10:16.300 11:06:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:16.300 11:06:27 -- scripts/common.sh@343 -- # case "$op" in 00:10:16.300 11:06:27 -- scripts/common.sh@344 -- # : 1 00:10:16.300 11:06:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:16.300 11:06:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.300 11:06:27 -- scripts/common.sh@364 -- # decimal 1 00:10:16.300 11:06:27 -- scripts/common.sh@352 -- # local d=1 00:10:16.300 11:06:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.300 11:06:27 -- scripts/common.sh@354 -- # echo 1 00:10:16.300 11:06:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:16.300 11:06:27 -- scripts/common.sh@365 -- # decimal 2 00:10:16.300 11:06:27 -- scripts/common.sh@352 -- # local d=2 00:10:16.300 11:06:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.300 11:06:27 -- scripts/common.sh@354 -- # echo 2 00:10:16.300 11:06:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:16.300 11:06:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:16.300 11:06:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:16.300 11:06:27 -- scripts/common.sh@367 -- # return 0 00:10:16.300 11:06:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.300 11:06:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:16.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.300 --rc genhtml_branch_coverage=1 00:10:16.300 --rc genhtml_function_coverage=1 00:10:16.300 --rc genhtml_legend=1 00:10:16.300 --rc geninfo_all_blocks=1 00:10:16.300 --rc geninfo_unexecuted_blocks=1 00:10:16.300 00:10:16.300 ' 00:10:16.300 11:06:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:16.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.300 --rc genhtml_branch_coverage=1 00:10:16.300 --rc genhtml_function_coverage=1 00:10:16.300 --rc genhtml_legend=1 00:10:16.300 --rc geninfo_all_blocks=1 00:10:16.300 --rc geninfo_unexecuted_blocks=1 00:10:16.300 00:10:16.300 ' 00:10:16.300 11:06:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:16.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.300 --rc genhtml_branch_coverage=1 00:10:16.300 --rc genhtml_function_coverage=1 00:10:16.300 --rc genhtml_legend=1 00:10:16.300 --rc geninfo_all_blocks=1 00:10:16.300 --rc geninfo_unexecuted_blocks=1 00:10:16.300 00:10:16.300 ' 00:10:16.300 11:06:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:16.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.300 --rc genhtml_branch_coverage=1 00:10:16.300 --rc genhtml_function_coverage=1 00:10:16.300 --rc genhtml_legend=1 00:10:16.300 --rc geninfo_all_blocks=1 00:10:16.300 --rc geninfo_unexecuted_blocks=1 00:10:16.300 00:10:16.300 ' 00:10:16.300 11:06:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.300 11:06:27 -- nvmf/common.sh@7 -- # uname -s 00:10:16.300 11:06:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.300 11:06:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.300 11:06:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.300 11:06:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.300 11:06:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.300 11:06:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.300 11:06:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.300 11:06:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.300 11:06:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.300 11:06:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.300 11:06:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:16.300 
11:06:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:16.300 11:06:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.300 11:06:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.300 11:06:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.300 11:06:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.300 11:06:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.300 11:06:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.300 11:06:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.300 11:06:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.300 11:06:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.301 11:06:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.301 11:06:27 -- paths/export.sh@5 -- # export PATH 00:10:16.301 11:06:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.301 11:06:27 -- nvmf/common.sh@46 -- # : 0 00:10:16.301 11:06:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:16.301 11:06:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:16.301 11:06:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:16.301 11:06:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.301 11:06:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.301 11:06:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
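The nvmf/common.sh setup above generates a fresh host NQN with nvme gen-hostnqn and reuses its UUID suffix as NVME_HOSTID; both values are carried in the NVME_HOST array and handed to the initiator-side nvme connect calls made elsewhere in the test scripts. As a rough, hypothetical sketch only (the literal connect commands are not part of this portion of the log), the values would be consumed roughly like this, assuming the 10.0.0.2:4420 listener and the nqn.2016-06.io.spdk:cnode1 subsystem configured later in this run:

    HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:6bf11412-...
    HOSTID=${HOSTNQN##*:}           # the UUID part doubles as the host identifier
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"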
00:10:16.301 11:06:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:16.301 11:06:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:16.301 11:06:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.301 11:06:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:16.301 11:06:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.301 11:06:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:16.301 11:06:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:16.301 11:06:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:16.301 11:06:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.301 11:06:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.301 11:06:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.301 11:06:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:16.301 11:06:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:16.301 11:06:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:16.301 11:06:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:16.301 11:06:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:16.301 11:06:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:16.301 11:06:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.301 11:06:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.301 11:06:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.301 11:06:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:16.301 11:06:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.301 11:06:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.301 11:06:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.301 11:06:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.301 11:06:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.301 11:06:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.301 11:06:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.301 11:06:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.301 11:06:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:16.301 11:06:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:16.301 Cannot find device "nvmf_tgt_br" 00:10:16.301 11:06:27 -- nvmf/common.sh@154 -- # true 00:10:16.301 11:06:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.301 Cannot find device "nvmf_tgt_br2" 00:10:16.301 11:06:27 -- nvmf/common.sh@155 -- # true 00:10:16.301 11:06:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:16.301 11:06:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:16.301 Cannot find device "nvmf_tgt_br" 00:10:16.301 11:06:27 -- nvmf/common.sh@157 -- # true 00:10:16.301 11:06:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:16.301 Cannot find device "nvmf_tgt_br2" 00:10:16.301 11:06:27 -- nvmf/common.sh@158 -- # true 00:10:16.301 11:06:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:16.301 11:06:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:16.301 11:06:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.301 11:06:27 -- nvmf/common.sh@161 -- # true 00:10:16.301 11:06:27 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.559 11:06:27 -- nvmf/common.sh@162 -- # true 00:10:16.559 11:06:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.559 11:06:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.559 11:06:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.559 11:06:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.559 11:06:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.559 11:06:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.559 11:06:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.559 11:06:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.559 11:06:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.559 11:06:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:16.559 11:06:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:16.559 11:06:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:16.559 11:06:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:16.559 11:06:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.559 11:06:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.559 11:06:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.560 11:06:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:16.560 11:06:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:16.560 11:06:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.560 11:06:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.560 11:06:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.560 11:06:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.560 11:06:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.560 11:06:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:16.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:16.560 00:10:16.560 --- 10.0.0.2 ping statistics --- 00:10:16.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.560 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:16.560 11:06:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:16.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:10:16.560 00:10:16.560 --- 10.0.0.3 ping statistics --- 00:10:16.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.560 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:16.560 11:06:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:16.560 00:10:16.560 --- 10.0.0.1 ping statistics --- 00:10:16.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.560 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:16.560 11:06:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.560 11:06:27 -- nvmf/common.sh@421 -- # return 0 00:10:16.560 11:06:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:16.560 11:06:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.560 11:06:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:16.560 11:06:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:16.560 11:06:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.560 11:06:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:16.560 11:06:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:16.560 11:06:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.560 11:06:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:16.560 11:06:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.560 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:10:16.560 11:06:27 -- nvmf/common.sh@469 -- # nvmfpid=74650 00:10:16.560 11:06:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.560 11:06:27 -- nvmf/common.sh@470 -- # waitforlisten 74650 00:10:16.560 11:06:27 -- common/autotest_common.sh@829 -- # '[' -z 74650 ']' 00:10:16.560 11:06:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.560 11:06:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.560 11:06:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.560 11:06:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.560 11:06:27 -- common/autotest_common.sh@10 -- # set +x 00:10:16.560 [2024-12-06 11:06:27.695747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:16.560 [2024-12-06 11:06:27.695830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.818 [2024-12-06 11:06:27.836716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.818 [2024-12-06 11:06:27.868056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.818 [2024-12-06 11:06:27.868227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.818 [2024-12-06 11:06:27.868239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.818 [2024-12-06 11:06:27.868247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
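The nvmf_veth_init trace above assembles the self-contained test network used for the rest of this run: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) left in the root namespace, an nvmf_br bridge joining the peer ends, an iptables rule admitting TCP port 4420, and ping probes in both directions to confirm reachability. Condensed sketch of that topology, copied from the commands traced above with the per-link up steps and the tolerated cleanup failures omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target port 1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator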
00:10:16.818 [2024-12-06 11:06:27.868270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.755 11:06:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.755 11:06:28 -- common/autotest_common.sh@862 -- # return 0 00:10:17.755 11:06:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:17.755 11:06:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 11:06:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.755 11:06:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:17.755 11:06:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 [2024-12-06 11:06:28.698819] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 [2024-12-06 11:06:28.714938] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 malloc0 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.755 11:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.755 11:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:17.755 11:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.755 11:06:28 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:17.756 11:06:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:17.756 11:06:28 -- nvmf/common.sh@520 -- # config=() 00:10:17.756 11:06:28 -- nvmf/common.sh@520 -- # local subsystem config 00:10:17.756 11:06:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:17.756 11:06:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:17.756 { 00:10:17.756 "params": { 00:10:17.756 "name": "Nvme$subsystem", 00:10:17.756 "trtype": "$TEST_TRANSPORT", 
00:10:17.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.756 "adrfam": "ipv4", 00:10:17.756 "trsvcid": "$NVMF_PORT", 00:10:17.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.756 "hdgst": ${hdgst:-false}, 00:10:17.756 "ddgst": ${ddgst:-false} 00:10:17.756 }, 00:10:17.756 "method": "bdev_nvme_attach_controller" 00:10:17.756 } 00:10:17.756 EOF 00:10:17.756 )") 00:10:17.756 11:06:28 -- nvmf/common.sh@542 -- # cat 00:10:17.756 11:06:28 -- nvmf/common.sh@544 -- # jq . 00:10:17.756 11:06:28 -- nvmf/common.sh@545 -- # IFS=, 00:10:17.756 11:06:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:17.756 "params": { 00:10:17.756 "name": "Nvme1", 00:10:17.756 "trtype": "tcp", 00:10:17.756 "traddr": "10.0.0.2", 00:10:17.756 "adrfam": "ipv4", 00:10:17.756 "trsvcid": "4420", 00:10:17.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.756 "hdgst": false, 00:10:17.756 "ddgst": false 00:10:17.756 }, 00:10:17.756 "method": "bdev_nvme_attach_controller" 00:10:17.756 }' 00:10:17.756 [2024-12-06 11:06:28.796938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:17.756 [2024-12-06 11:06:28.797026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74683 ] 00:10:18.015 [2024-12-06 11:06:28.940058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.015 [2024-12-06 11:06:28.980550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.015 Running I/O for 10 seconds... 00:10:27.992 00:10:27.992 Latency(us) 00:10:27.992 [2024-12-06T11:06:39.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.992 [2024-12-06T11:06:39.139Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:27.992 Verification LBA range: start 0x0 length 0x1000 00:10:27.992 Nvme1n1 : 10.01 10133.42 79.17 0.00 0.00 12599.04 1228.80 19065.02 00:10:27.992 [2024-12-06T11:06:39.139Z] =================================================================================================================== 00:10:27.992 [2024-12-06T11:06:39.139Z] Total : 10133.42 79.17 0.00 0.00 12599.04 1228.80 19065.02 00:10:28.251 11:06:39 -- target/zcopy.sh@39 -- # perfpid=74800 00:10:28.251 11:06:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:28.251 11:06:39 -- common/autotest_common.sh@10 -- # set +x 00:10:28.251 11:06:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:28.251 11:06:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:28.251 11:06:39 -- nvmf/common.sh@520 -- # config=() 00:10:28.251 11:06:39 -- nvmf/common.sh@520 -- # local subsystem config 00:10:28.251 11:06:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:28.251 11:06:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:28.251 { 00:10:28.251 "params": { 00:10:28.251 "name": "Nvme$subsystem", 00:10:28.251 "trtype": "$TEST_TRANSPORT", 00:10:28.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.251 "adrfam": "ipv4", 00:10:28.251 "trsvcid": "$NVMF_PORT", 00:10:28.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.252 "hdgst": ${hdgst:-false}, 00:10:28.252 "ddgst": ${ddgst:-false} 
00:10:28.252 }, 00:10:28.252 "method": "bdev_nvme_attach_controller" 00:10:28.252 } 00:10:28.252 EOF 00:10:28.252 )") 00:10:28.252 11:06:39 -- nvmf/common.sh@542 -- # cat 00:10:28.252 [2024-12-06 11:06:39.272236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.272441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 11:06:39 -- nvmf/common.sh@544 -- # jq . 00:10:28.252 11:06:39 -- nvmf/common.sh@545 -- # IFS=, 00:10:28.252 11:06:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:28.252 "params": { 00:10:28.252 "name": "Nvme1", 00:10:28.252 "trtype": "tcp", 00:10:28.252 "traddr": "10.0.0.2", 00:10:28.252 "adrfam": "ipv4", 00:10:28.252 "trsvcid": "4420", 00:10:28.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.252 "hdgst": false, 00:10:28.252 "ddgst": false 00:10:28.252 }, 00:10:28.252 "method": "bdev_nvme_attach_controller" 00:10:28.252 }' 00:10:28.252 [2024-12-06 11:06:39.284227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.284268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.296221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.296505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.308202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.308231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.320227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.320513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.323348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
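Stripped of the xtrace prefixes, the target-side configuration performed above through rpc_cmd reduces to the following rpc.py sequence; every flag is taken directly from the trace (-c 0 sets the in-capsule data size, --zcopy enables the zero-copy path under test, -m 10 caps cnode1 at ten namespaces):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0      # 32 MB RAM-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1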
00:10:28.252 [2024-12-06 11:06:39.323811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74800 ] 00:10:28.252 [2024-12-06 11:06:39.332220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.332249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.344240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.344521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.356211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.356387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.368233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.368392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.380257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.380485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.252 [2024-12-06 11:06:39.392235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.252 [2024-12-06 11:06:39.392435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.404240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.404422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.416239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.416466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.428242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.428425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.440235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.440388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.452235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.452400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.464245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.464433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.468961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.512 [2024-12-06 11:06:39.476281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.476600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
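Both bdevperf invocations above take their bdev configuration on an anonymous file descriptor: gen_nvmf_target_json emits a single bdev_nvme_attach_controller stanza aimed at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, and the script hands it over as --json /dev/fd/62 (or /dev/fd/63), the usual result of bash process substitution rather than a temporary config file. A minimal sketch of the same pattern, assuming an SPDK build tree with nvmf/common.sh already sourced so gen_nvmf_target_json is defined:

    # first pass: 10 s verify workload, queue depth 128, 8 KiB I/O
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # second pass: 5 s 50/50 random read/write while the test keeps issuing RPCs at the subsystem
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192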
00:10:28.512 [2024-12-06 11:06:39.488279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.488505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.500287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.500532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.503417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.512 [2024-12-06 11:06:39.512261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.512292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.524297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.524632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.536294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.536614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.548295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.548337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.560280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.560449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.572298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.572331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.584306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.584335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.596315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.596489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.608343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.608510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.620354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.620524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 [2024-12-06 11:06:39.632360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.632596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.512 Running I/O for 5 seconds... 
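Each repeated message pair from here on is one rejected RPC: spdk_nvmf_subsystem_add_ns_ext refuses the request because NSID 1 (malloc0) is already attached to cnode1, and nvmf_rpc_ns_paused then reports the failure after the subsystem was paused for the attempted change. The pairs keep appearing while bdevperf I/O is in flight and the run proceeds normally past them, so they read as an intentionally exercised error path under load rather than a test failure. A standalone reproduction against the target configured earlier in this log would be a hypothetical call such as:

    # NSID 1 is already occupied by malloc0, so the target rejects this and logs
    # "Requested NSID 1 already in use" followed by "Unable to add namespace".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1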
00:10:28.512 [2024-12-06 11:06:39.648251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.512 [2024-12-06 11:06:39.648528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.665376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.665681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.680796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.681127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.692361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.692714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.708988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.709166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.726755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.726934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.741582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.741790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.758303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.758497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.774080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.774275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.791495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.791727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.806475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.806696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.822157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.822207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.839139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.839173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.856085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.856122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.870971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 
[2024-12-06 11:06:39.871005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.880938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.881121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.895142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.895177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.771 [2024-12-06 11:06:39.906787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.771 [2024-12-06 11:06:39.906821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.921982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.922017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.931380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.931418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.947430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.947466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.964891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.964939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.980408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.980605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:39.997144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:39.997177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.014093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:40.014132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.030517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:40.030579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.048174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:40.048361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.063380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:40.063572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.074625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.031 [2024-12-06 11:06:40.074696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.031 [2024-12-06 11:06:40.091407] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.032 [2024-12-06 11:06:40.091445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.032 [2024-12-06 11:06:40.107727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.032 [2024-12-06 11:06:40.107760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.032 [2024-12-06 11:06:40.124435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.032 [2024-12-06 11:06:40.124469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.032 [2024-12-06 11:06:40.140466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.032 [2024-12-06 11:06:40.140499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.032 [2024-12-06 11:06:40.157176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.032 [2024-12-06 11:06:40.157209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.032 [2024-12-06 11:06:40.174555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.174773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.189356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.189391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.206806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.206838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.221798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.221832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.233554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.233767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.249197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.249378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.266344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.266377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.282020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.282053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.293335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.293367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.309820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.309852] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.325500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.325533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.343574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.343656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.359061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.359264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.377231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.377264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.391722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.391755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.408184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.408217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.423649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.423711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.292 [2024-12-06 11:06:40.435734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.292 [2024-12-06 11:06:40.435767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.451921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.451954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.468901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.469116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.485249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.485283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.502477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.502514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.517483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.517516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.533808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.533842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.549849] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.549884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.567809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.567842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.582294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.582328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.598761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.598796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.616906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.616939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.631455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.631495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.649330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.649365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.663454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.663490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.679469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.679504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.562 [2024-12-06 11:06:40.696601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.562 [2024-12-06 11:06:40.696684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836 [2024-12-06 11:06:40.712416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.836 [2024-12-06 11:06:40.712467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836 [2024-12-06 11:06:40.730773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.836 [2024-12-06 11:06:40.730836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836 [2024-12-06 11:06:40.745877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.836 [2024-12-06 11:06:40.746214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836 [2024-12-06 11:06:40.755169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.836 [2024-12-06 11:06:40.755220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836 [2024-12-06 11:06:40.770505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.836 [2024-12-06 11:06:40.770603] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.836
(the same error pair — subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats for each remaining add-namespace attempt from [2024-12-06 11:06:40.787182] through [2024-12-06 11:06:44.642491] while the zcopy I/O job runs)
00:10:33.735 Latency(us)
00:10:33.735 [2024-12-06T11:06:44.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:33.735 [2024-12-06T11:06:44.882Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:33.735 Nvme1n1 : 5.01 12552.15 98.06 0.00 0.00 10184.24 3813.00 20971.52
00:10:33.735 [2024-12-06T11:06:44.882Z] ===================================================================================================================
00:10:33.735 [2024-12-06T11:06:44.882Z] Total : 12552.15 98.06 0.00 0.00 10184.24 3813.00 20971.52
00:10:33.735 [2024-12-06 11:06:44.651341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.651379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.663350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.663605]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.675389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.675653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.687382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.687653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.699400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.699711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.711405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.711710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.723387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.723618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.735406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.735687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.747393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.747586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.759406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.759658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.771382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.771525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 [2024-12-06 11:06:44.783400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.735 [2024-12-06 11:06:44.783564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.735 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74800) - No such process 00:10:33.735 11:06:44 -- target/zcopy.sh@49 -- # wait 74800 00:10:33.735 11:06:44 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.735 11:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.735 11:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 11:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.735 11:06:44 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.735 11:06:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.735 11:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 delay0 00:10:33.735 11:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.735 11:06:44 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:33.735 11:06:44 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:33.735 11:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 11:06:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.736 11:06:44 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:33.995 [2024-12-06 11:06:44.984932] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:40.559 Initializing NVMe Controllers 00:10:40.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:40.559 Initialization complete. Launching workers. 00:10:40.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 125 00:10:40.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 412, failed to submit 33 00:10:40.559 success 294, unsuccess 118, failed 0 00:10:40.559 11:06:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:40.559 11:06:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:40.559 11:06:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:40.559 11:06:51 -- nvmf/common.sh@116 -- # sync 00:10:40.559 11:06:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:40.559 11:06:51 -- nvmf/common.sh@119 -- # set +e 00:10:40.559 11:06:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:40.559 11:06:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:40.559 rmmod nvme_tcp 00:10:40.559 rmmod nvme_fabrics 00:10:40.559 rmmod nvme_keyring 00:10:40.559 11:06:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:40.559 11:06:51 -- nvmf/common.sh@123 -- # set -e 00:10:40.559 11:06:51 -- nvmf/common.sh@124 -- # return 0 00:10:40.559 11:06:51 -- nvmf/common.sh@477 -- # '[' -n 74650 ']' 00:10:40.559 11:06:51 -- nvmf/common.sh@478 -- # killprocess 74650 00:10:40.559 11:06:51 -- common/autotest_common.sh@936 -- # '[' -z 74650 ']' 00:10:40.559 11:06:51 -- common/autotest_common.sh@940 -- # kill -0 74650 00:10:40.559 11:06:51 -- common/autotest_common.sh@941 -- # uname 00:10:40.559 11:06:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:40.559 11:06:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74650 00:10:40.559 killing process with pid 74650 00:10:40.559 11:06:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:40.559 11:06:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:40.559 11:06:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74650' 00:10:40.559 11:06:51 -- common/autotest_common.sh@955 -- # kill 74650 00:10:40.559 11:06:51 -- common/autotest_common.sh@960 -- # wait 74650 00:10:40.559 11:06:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:40.559 11:06:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:40.559 11:06:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:40.559 11:06:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.559 11:06:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:40.559 11:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.559 11:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.559 11:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.559 11:06:51 -- nvmf/common.sh@278 -- # 
ip -4 addr flush nvmf_init_if 00:10:40.559 00:10:40.559 real 0m24.257s 00:10:40.559 user 0m39.920s 00:10:40.559 sys 0m6.394s 00:10:40.559 ************************************ 00:10:40.560 END TEST nvmf_zcopy 00:10:40.560 ************************************ 00:10:40.560 11:06:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:40.560 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:40.560 11:06:51 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:40.560 11:06:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.560 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:40.560 ************************************ 00:10:40.560 START TEST nvmf_nmic 00:10:40.560 ************************************ 00:10:40.560 11:06:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:40.560 * Looking for test storage... 00:10:40.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.560 11:06:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:40.560 11:06:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:40.560 11:06:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:40.560 11:06:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:40.560 11:06:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:40.560 11:06:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:40.560 11:06:51 -- scripts/common.sh@335 -- # IFS=.-: 00:10:40.560 11:06:51 -- scripts/common.sh@335 -- # read -ra ver1 00:10:40.560 11:06:51 -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.560 11:06:51 -- scripts/common.sh@336 -- # read -ra ver2 00:10:40.560 11:06:51 -- scripts/common.sh@337 -- # local 'op=<' 00:10:40.560 11:06:51 -- scripts/common.sh@339 -- # ver1_l=2 00:10:40.560 11:06:51 -- scripts/common.sh@340 -- # ver2_l=1 00:10:40.560 11:06:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:40.560 11:06:51 -- scripts/common.sh@343 -- # case "$op" in 00:10:40.560 11:06:51 -- scripts/common.sh@344 -- # : 1 00:10:40.560 11:06:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:40.560 11:06:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.560 11:06:51 -- scripts/common.sh@364 -- # decimal 1 00:10:40.560 11:06:51 -- scripts/common.sh@352 -- # local d=1 00:10:40.560 11:06:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.560 11:06:51 -- scripts/common.sh@354 -- # echo 1 00:10:40.560 11:06:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:40.560 11:06:51 -- scripts/common.sh@365 -- # decimal 2 00:10:40.560 11:06:51 -- scripts/common.sh@352 -- # local d=2 00:10:40.560 11:06:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.560 11:06:51 -- scripts/common.sh@354 -- # echo 2 00:10:40.560 11:06:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:40.560 11:06:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:40.560 11:06:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:40.560 11:06:51 -- scripts/common.sh@367 -- # return 0 00:10:40.560 11:06:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.560 --rc genhtml_branch_coverage=1 00:10:40.560 --rc genhtml_function_coverage=1 00:10:40.560 --rc genhtml_legend=1 00:10:40.560 --rc geninfo_all_blocks=1 00:10:40.560 --rc geninfo_unexecuted_blocks=1 00:10:40.560 00:10:40.560 ' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.560 --rc genhtml_branch_coverage=1 00:10:40.560 --rc genhtml_function_coverage=1 00:10:40.560 --rc genhtml_legend=1 00:10:40.560 --rc geninfo_all_blocks=1 00:10:40.560 --rc geninfo_unexecuted_blocks=1 00:10:40.560 00:10:40.560 ' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.560 --rc genhtml_branch_coverage=1 00:10:40.560 --rc genhtml_function_coverage=1 00:10:40.560 --rc genhtml_legend=1 00:10:40.560 --rc geninfo_all_blocks=1 00:10:40.560 --rc geninfo_unexecuted_blocks=1 00:10:40.560 00:10:40.560 ' 00:10:40.560 11:06:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.560 --rc genhtml_branch_coverage=1 00:10:40.560 --rc genhtml_function_coverage=1 00:10:40.560 --rc genhtml_legend=1 00:10:40.560 --rc geninfo_all_blocks=1 00:10:40.560 --rc geninfo_unexecuted_blocks=1 00:10:40.560 00:10:40.560 ' 00:10:40.560 11:06:51 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.560 11:06:51 -- nvmf/common.sh@7 -- # uname -s 00:10:40.560 11:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.560 11:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.560 11:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.560 11:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.560 11:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.560 11:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.560 11:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.560 11:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.560 11:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.560 11:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:40.560 
11:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:40.560 11:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.560 11:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.560 11:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.560 11:06:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.560 11:06:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.560 11:06:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.560 11:06:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.560 11:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.560 11:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.560 11:06:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.560 11:06:51 -- paths/export.sh@5 -- # export PATH 00:10:40.560 11:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.560 11:06:51 -- nvmf/common.sh@46 -- # : 0 00:10:40.560 11:06:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:40.560 11:06:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:40.560 11:06:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:40.560 11:06:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.560 11:06:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.560 11:06:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
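The NVME_HOSTNQN/NVME_HOSTID pair exported above is the identity the initiator side hands to nvme connect later in this run. Outside the harness the same flow looks roughly like this (a minimal sketch; how common.sh actually derives the host ID from the NQN is assumed here, not taken from the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the trailing UUID
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420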
00:10:40.560 11:06:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:40.560 11:06:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:40.560 11:06:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.560 11:06:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.560 11:06:51 -- target/nmic.sh@14 -- # nvmftestinit 00:10:40.560 11:06:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:40.560 11:06:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.560 11:06:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:40.560 11:06:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:40.560 11:06:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:40.560 11:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.560 11:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.560 11:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.560 11:06:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:40.560 11:06:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:40.560 11:06:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.560 11:06:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.560 11:06:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.560 11:06:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:40.560 11:06:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.560 11:06:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.560 11:06:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.560 11:06:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.560 11:06:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.560 11:06:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.560 11:06:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.560 11:06:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.560 11:06:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:40.561 11:06:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:40.561 Cannot find device "nvmf_tgt_br" 00:10:40.561 11:06:51 -- nvmf/common.sh@154 -- # true 00:10:40.561 11:06:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.561 Cannot find device "nvmf_tgt_br2" 00:10:40.561 11:06:51 -- nvmf/common.sh@155 -- # true 00:10:40.561 11:06:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:40.561 11:06:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:40.561 Cannot find device "nvmf_tgt_br" 00:10:40.561 11:06:51 -- nvmf/common.sh@157 -- # true 00:10:40.561 11:06:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:40.820 Cannot find device "nvmf_tgt_br2" 00:10:40.820 11:06:51 -- nvmf/common.sh@158 -- # true 00:10:40.820 11:06:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:40.820 11:06:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:40.820 11:06:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.820 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:40.820 11:06:51 -- nvmf/common.sh@161 -- # true 00:10:40.820 11:06:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.820 11:06:51 -- nvmf/common.sh@162 -- # true 00:10:40.820 11:06:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.820 11:06:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.820 11:06:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.820 11:06:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.820 11:06:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.820 11:06:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.820 11:06:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.820 11:06:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.820 11:06:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.820 11:06:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:40.820 11:06:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:40.820 11:06:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:40.820 11:06:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:40.820 11:06:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.820 11:06:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.820 11:06:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.820 11:06:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:40.820 11:06:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:40.820 11:06:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.820 11:06:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.820 11:06:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.821 11:06:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.821 11:06:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.821 11:06:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:40.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:40.821 00:10:40.821 --- 10.0.0.2 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:40.821 11:06:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:40.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:10:40.821 00:10:40.821 --- 10.0.0.3 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:40.821 11:06:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:40.821 00:10:40.821 --- 10.0.0.1 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:40.821 11:06:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.821 11:06:51 -- nvmf/common.sh@421 -- # return 0 00:10:40.821 11:06:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:40.821 11:06:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.821 11:06:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:40.821 11:06:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:40.821 11:06:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.821 11:06:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:40.821 11:06:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:40.821 11:06:51 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:40.821 11:06:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:40.821 11:06:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.821 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:41.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.080 11:06:51 -- nvmf/common.sh@469 -- # nvmfpid=75134 00:10:41.080 11:06:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.080 11:06:51 -- nvmf/common.sh@470 -- # waitforlisten 75134 00:10:41.080 11:06:51 -- common/autotest_common.sh@829 -- # '[' -z 75134 ']' 00:10:41.080 11:06:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.080 11:06:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.080 11:06:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.080 11:06:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.080 11:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:41.080 [2024-12-06 11:06:52.015829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:41.080 [2024-12-06 11:06:52.016123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.080 [2024-12-06 11:06:52.155024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.080 [2024-12-06 11:06:52.189299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:41.080 [2024-12-06 11:06:52.189722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.080 [2024-12-06 11:06:52.189775] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.080 [2024-12-06 11:06:52.189902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
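The veth/namespace plumbing that nvmf_veth_init performed above reduces to the following sequence when collected in one place (a sketch assembled from the commands logged above; interface and namespace names follow the NVMF_* variables):

    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair (10.0.0.2)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair (10.0.0.3)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are simply sanity checks of that topology before nvmf_tgt is started.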
00:10:41.080 [2024-12-06 11:06:52.190105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.080 [2024-12-06 11:06:52.190270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.080 [2024-12-06 11:06:52.191343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.080 [2024-12-06 11:06:52.191348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.019 11:06:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.019 11:06:53 -- common/autotest_common.sh@862 -- # return 0 00:10:42.019 11:06:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:42.019 11:06:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:06:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.019 11:06:53 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 [2024-12-06 11:06:53.068811] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 Malloc0 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 [2024-12-06 11:06:53.130935] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.019 test case1: single bdev can't be used in multiple subsystems 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:42.019 11:06:53 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@28 -- # nmic_status=0 00:10:42.019 11:06:53 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.019 [2024-12-06 11:06:53.154750] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:42.019 [2024-12-06 11:06:53.154790] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:42.019 [2024-12-06 11:06:53.154819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.019 request: 00:10:42.019 { 00:10:42.019 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:42.019 "namespace": { 00:10:42.019 "bdev_name": "Malloc0" 00:10:42.019 }, 00:10:42.019 "method": "nvmf_subsystem_add_ns", 00:10:42.019 "req_id": 1 00:10:42.019 } 00:10:42.019 Got JSON-RPC error response 00:10:42.019 response: 00:10:42.019 { 00:10:42.019 "code": -32602, 00:10:42.019 "message": "Invalid parameters" 00:10:42.019 } 00:10:42.019 Adding namespace failed - expected result. 00:10:42.019 test case2: host connect to nvmf target in multiple paths 00:10:42.019 11:06:53 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:42.019 11:06:53 -- target/nmic.sh@29 -- # nmic_status=1 00:10:42.019 11:06:53 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:42.019 11:06:53 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:42.019 11:06:53 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:42.019 11:06:53 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:42.019 11:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.019 11:06:53 -- common/autotest_common.sh@10 -- # set +x 00:10:42.279 [2024-12-06 11:06:53.166924] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:42.279 11:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.279 11:06:53 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.279 11:06:53 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:42.538 11:06:53 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.538 11:06:53 -- common/autotest_common.sh@1187 -- # local i=0 00:10:42.538 11:06:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.538 11:06:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:42.538 11:06:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:44.441 11:06:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:44.441 11:06:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:44.441 11:06:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.441 11:06:55 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:44.441 11:06:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.441 11:06:55 -- common/autotest_common.sh@1197 -- # return 0 00:10:44.441 11:06:55 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.441 [global] 00:10:44.441 thread=1 00:10:44.441 invalidate=1 00:10:44.441 rw=write 00:10:44.441 time_based=1 00:10:44.441 runtime=1 00:10:44.441 ioengine=libaio 00:10:44.442 direct=1 00:10:44.442 bs=4096 00:10:44.442 iodepth=1 00:10:44.442 norandommap=0 00:10:44.442 numjobs=1 00:10:44.442 00:10:44.442 verify_dump=1 00:10:44.442 verify_backlog=512 00:10:44.442 verify_state_save=0 00:10:44.442 do_verify=1 00:10:44.442 verify=crc32c-intel 00:10:44.442 [job0] 00:10:44.442 filename=/dev/nvme0n1 00:10:44.442 Could not set queue depth (nvme0n1) 00:10:44.707 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.707 fio-3.35 00:10:44.707 Starting 1 thread 00:10:45.661 00:10:45.661 job0: (groupid=0, jobs=1): err= 0: pid=75226: Fri Dec 6 11:06:56 2024 00:10:45.661 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:10:45.661 slat (nsec): min=10753, max=59444, avg=13525.28, stdev=4021.14 00:10:45.661 clat (usec): min=129, max=715, avg=179.26, stdev=28.64 00:10:45.661 lat (usec): min=140, max=728, avg=192.79, stdev=29.04 00:10:45.661 clat percentiles (usec): 00:10:45.661 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 159], 00:10:45.661 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:45.661 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 227], 00:10:45.661 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 338], 99.95th=[ 562], 00:10:45.661 | 99.99th=[ 717] 00:10:45.661 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.661 slat (usec): min=13, max=101, avg=21.62, stdev= 6.79 00:10:45.661 clat (usec): min=78, max=246, avg=109.15, stdev=20.26 00:10:45.661 lat (usec): min=94, max=337, avg=130.76, stdev=22.06 00:10:45.661 clat percentiles (usec): 00:10:45.661 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 94], 00:10:45.661 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 110], 00:10:45.661 | 70.00th=[ 116], 80.00th=[ 124], 90.00th=[ 137], 95.00th=[ 149], 00:10:45.661 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 227], 99.95th=[ 245], 00:10:45.661 | 99.99th=[ 247] 00:10:45.661 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.661 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.661 lat (usec) : 100=19.58%, 250=79.36%, 500=1.03%, 750=0.03% 00:10:45.661 cpu : usr=2.50%, sys=8.10%, ctx=6124, majf=0, minf=5 00:10:45.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.661 issued rwts: total=3051,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.661 00:10:45.661 Run status group 0 (all jobs): 00:10:45.661 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:10:45.661 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:45.661 00:10:45.661 Disk stats (read/write): 
00:10:45.661 nvme0n1: ios=2610/2976, merge=0/0, ticks=508/382, in_queue=890, util=91.28% 00:10:45.661 11:06:56 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.921 11:06:56 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.921 11:06:56 -- common/autotest_common.sh@1208 -- # local i=0 00:10:45.921 11:06:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:45.921 11:06:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.921 11:06:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:45.921 11:06:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.921 11:06:56 -- common/autotest_common.sh@1220 -- # return 0 00:10:45.921 11:06:56 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.921 11:06:56 -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.921 11:06:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:45.921 11:06:56 -- nvmf/common.sh@116 -- # sync 00:10:45.921 11:06:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:45.921 11:06:56 -- nvmf/common.sh@119 -- # set +e 00:10:45.921 11:06:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:45.921 11:06:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:45.921 rmmod nvme_tcp 00:10:45.921 rmmod nvme_fabrics 00:10:45.921 rmmod nvme_keyring 00:10:45.921 11:06:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:45.921 11:06:56 -- nvmf/common.sh@123 -- # set -e 00:10:45.921 11:06:56 -- nvmf/common.sh@124 -- # return 0 00:10:45.921 11:06:56 -- nvmf/common.sh@477 -- # '[' -n 75134 ']' 00:10:45.921 11:06:56 -- nvmf/common.sh@478 -- # killprocess 75134 00:10:45.921 11:06:56 -- common/autotest_common.sh@936 -- # '[' -z 75134 ']' 00:10:45.921 11:06:56 -- common/autotest_common.sh@940 -- # kill -0 75134 00:10:45.921 11:06:56 -- common/autotest_common.sh@941 -- # uname 00:10:45.921 11:06:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.921 11:06:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75134 00:10:45.921 11:06:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:45.921 11:06:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:45.921 killing process with pid 75134 00:10:45.921 11:06:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75134' 00:10:45.921 11:06:56 -- common/autotest_common.sh@955 -- # kill 75134 00:10:45.921 11:06:56 -- common/autotest_common.sh@960 -- # wait 75134 00:10:46.181 11:06:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:46.181 11:06:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:46.181 11:06:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:46.181 11:06:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.181 11:06:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:46.181 11:06:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.181 11:06:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.181 11:06:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.181 11:06:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:46.181 00:10:46.181 real 0m5.742s 00:10:46.181 user 0m18.682s 00:10:46.181 sys 0m2.178s 00:10:46.181 11:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.181 11:06:57 -- common/autotest_common.sh@10 -- # set +x 
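Stripped of the xtrace noise, the nmic flow just exercised is short. test case1 deliberately claims the same Malloc bdev from a second subsystem and expects the -32602 "Invalid parameters" response seen above; test case2 then adds a second listener on port 4421 and connects to the same subsystem over both paths. A condensed sketch using the same RPCs (rpc.py path as used elsewhere in this run; host NQN/ID variables as set by common.sh):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case1: a bdev already claimed by cnode1 cannot be added to cnode2
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: add_ns succeeded' || echo 'rejected as expected'
    # test case2: two listeners, two connects to the same subsystem
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

With both paths connected, the fio write/verify pass above ran against /dev/nvme0n1, and nvme disconnect reported two controllers, which is exactly what the test asserts.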
00:10:46.181 ************************************ 00:10:46.181 END TEST nvmf_nmic 00:10:46.181 ************************************ 00:10:46.181 11:06:57 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:46.181 11:06:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:46.181 11:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.181 11:06:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.181 ************************************ 00:10:46.181 START TEST nvmf_fio_target 00:10:46.181 ************************************ 00:10:46.181 11:06:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:46.181 * Looking for test storage... 00:10:46.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.181 11:06:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:46.181 11:06:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:46.181 11:06:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:46.441 11:06:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:46.441 11:06:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:46.441 11:06:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:46.441 11:06:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:46.441 11:06:57 -- scripts/common.sh@335 -- # IFS=.-: 00:10:46.441 11:06:57 -- scripts/common.sh@335 -- # read -ra ver1 00:10:46.441 11:06:57 -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.441 11:06:57 -- scripts/common.sh@336 -- # read -ra ver2 00:10:46.441 11:06:57 -- scripts/common.sh@337 -- # local 'op=<' 00:10:46.441 11:06:57 -- scripts/common.sh@339 -- # ver1_l=2 00:10:46.441 11:06:57 -- scripts/common.sh@340 -- # ver2_l=1 00:10:46.441 11:06:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:46.441 11:06:57 -- scripts/common.sh@343 -- # case "$op" in 00:10:46.441 11:06:57 -- scripts/common.sh@344 -- # : 1 00:10:46.441 11:06:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:46.441 11:06:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.441 11:06:57 -- scripts/common.sh@364 -- # decimal 1 00:10:46.441 11:06:57 -- scripts/common.sh@352 -- # local d=1 00:10:46.441 11:06:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.441 11:06:57 -- scripts/common.sh@354 -- # echo 1 00:10:46.441 11:06:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:46.441 11:06:57 -- scripts/common.sh@365 -- # decimal 2 00:10:46.441 11:06:57 -- scripts/common.sh@352 -- # local d=2 00:10:46.441 11:06:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.441 11:06:57 -- scripts/common.sh@354 -- # echo 2 00:10:46.441 11:06:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:46.441 11:06:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:46.441 11:06:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:46.441 11:06:57 -- scripts/common.sh@367 -- # return 0 00:10:46.441 11:06:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.441 11:06:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.441 --rc genhtml_branch_coverage=1 00:10:46.441 --rc genhtml_function_coverage=1 00:10:46.441 --rc genhtml_legend=1 00:10:46.441 --rc geninfo_all_blocks=1 00:10:46.441 --rc geninfo_unexecuted_blocks=1 00:10:46.441 00:10:46.441 ' 00:10:46.441 11:06:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.441 --rc genhtml_branch_coverage=1 00:10:46.441 --rc genhtml_function_coverage=1 00:10:46.441 --rc genhtml_legend=1 00:10:46.441 --rc geninfo_all_blocks=1 00:10:46.441 --rc geninfo_unexecuted_blocks=1 00:10:46.441 00:10:46.441 ' 00:10:46.441 11:06:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.441 --rc genhtml_branch_coverage=1 00:10:46.441 --rc genhtml_function_coverage=1 00:10:46.441 --rc genhtml_legend=1 00:10:46.441 --rc geninfo_all_blocks=1 00:10:46.441 --rc geninfo_unexecuted_blocks=1 00:10:46.441 00:10:46.441 ' 00:10:46.441 11:06:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:46.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.441 --rc genhtml_branch_coverage=1 00:10:46.441 --rc genhtml_function_coverage=1 00:10:46.441 --rc genhtml_legend=1 00:10:46.441 --rc geninfo_all_blocks=1 00:10:46.441 --rc geninfo_unexecuted_blocks=1 00:10:46.441 00:10:46.441 ' 00:10:46.441 11:06:57 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.441 11:06:57 -- nvmf/common.sh@7 -- # uname -s 00:10:46.441 11:06:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.441 11:06:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.441 11:06:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.441 11:06:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.441 11:06:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.441 11:06:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.441 11:06:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.441 11:06:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.441 11:06:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.441 11:06:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.441 11:06:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:46.441 
11:06:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:10:46.441 11:06:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.441 11:06:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.441 11:06:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.441 11:06:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.441 11:06:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.441 11:06:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.441 11:06:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.441 11:06:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.441 11:06:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.441 11:06:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.441 11:06:57 -- paths/export.sh@5 -- # export PATH 00:10:46.441 11:06:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.441 11:06:57 -- nvmf/common.sh@46 -- # : 0 00:10:46.441 11:06:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:46.441 11:06:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:46.441 11:06:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:46.441 11:06:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.441 11:06:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.441 11:06:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:46.441 11:06:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:46.441 11:06:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:46.441 11:06:57 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.441 11:06:57 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.441 11:06:57 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.441 11:06:57 -- target/fio.sh@16 -- # nvmftestinit 00:10:46.441 11:06:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:46.442 11:06:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.442 11:06:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:46.442 11:06:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:46.442 11:06:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:46.442 11:06:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.442 11:06:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.442 11:06:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.442 11:06:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:46.442 11:06:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:46.442 11:06:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:46.442 11:06:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:46.442 11:06:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:46.442 11:06:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:46.442 11:06:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.442 11:06:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.442 11:06:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.442 11:06:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:46.442 11:06:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.442 11:06:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.442 11:06:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.442 11:06:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.442 11:06:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.442 11:06:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.442 11:06:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.442 11:06:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.442 11:06:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:46.442 11:06:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:46.442 Cannot find device "nvmf_tgt_br" 00:10:46.442 11:06:57 -- nvmf/common.sh@154 -- # true 00:10:46.442 11:06:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.442 Cannot find device "nvmf_tgt_br2" 00:10:46.442 11:06:57 -- nvmf/common.sh@155 -- # true 00:10:46.442 11:06:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:46.442 11:06:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:46.442 Cannot find device "nvmf_tgt_br" 00:10:46.442 11:06:57 -- nvmf/common.sh@157 -- # true 00:10:46.442 11:06:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:46.442 Cannot find device "nvmf_tgt_br2" 00:10:46.442 11:06:57 -- nvmf/common.sh@158 -- # true 00:10:46.442 11:06:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:46.442 11:06:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:46.442 11:06:57 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.442 11:06:57 -- nvmf/common.sh@161 -- # true 00:10:46.442 11:06:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.701 11:06:57 -- nvmf/common.sh@162 -- # true 00:10:46.701 11:06:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.701 11:06:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.701 11:06:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.701 11:06:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.701 11:06:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.701 11:06:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.701 11:06:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.701 11:06:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.701 11:06:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.701 11:06:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:46.701 11:06:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:46.701 11:06:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:46.701 11:06:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:46.701 11:06:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.701 11:06:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.701 11:06:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.701 11:06:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:46.701 11:06:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:46.701 11:06:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.701 11:06:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.701 11:06:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.701 11:06:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.701 11:06:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.701 11:06:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:46.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:10:46.701 00:10:46.701 --- 10.0.0.2 ping statistics --- 00:10:46.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.701 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:46.701 11:06:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:46.701 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:46.701 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:46.701 00:10:46.701 --- 10.0.0.3 ping statistics --- 00:10:46.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.701 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:46.701 11:06:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:10:46.701 00:10:46.701 --- 10.0.0.1 ping statistics --- 00:10:46.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.701 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:46.701 11:06:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.701 11:06:57 -- nvmf/common.sh@421 -- # return 0 00:10:46.701 11:06:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:46.701 11:06:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.701 11:06:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:46.701 11:06:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:46.701 11:06:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.701 11:06:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:46.701 11:06:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:46.961 11:06:57 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:46.961 11:06:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:46.961 11:06:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:46.961 11:06:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.961 11:06:57 -- nvmf/common.sh@469 -- # nvmfpid=75410 00:10:46.961 11:06:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.961 11:06:57 -- nvmf/common.sh@470 -- # waitforlisten 75410 00:10:46.961 11:06:57 -- common/autotest_common.sh@829 -- # '[' -z 75410 ']' 00:10:46.961 11:06:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.961 11:06:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.961 11:06:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.961 11:06:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.961 11:06:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.961 [2024-12-06 11:06:57.919285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.961 [2024-12-06 11:06:57.919430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.961 [2024-12-06 11:06:58.061837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.961 [2024-12-06 11:06:58.094683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:46.961 [2024-12-06 11:06:58.094843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.961 [2024-12-06 11:06:58.094857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:46.961 [2024-12-06 11:06:58.094865] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.961 [2024-12-06 11:06:58.094986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.961 [2024-12-06 11:06:58.095139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.961 [2024-12-06 11:06:58.095623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.961 [2024-12-06 11:06:58.095634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.896 11:06:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.896 11:06:58 -- common/autotest_common.sh@862 -- # return 0 00:10:47.896 11:06:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:47.896 11:06:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:47.896 11:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:47.896 11:06:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.896 11:06:58 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.154 [2024-12-06 11:06:59.168618] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.155 11:06:59 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.412 11:06:59 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:48.412 11:06:59 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.670 11:06:59 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:48.670 11:06:59 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.928 11:06:59 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:48.928 11:07:00 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.186 11:07:00 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:49.186 11:07:00 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:49.444 11:07:00 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.703 11:07:00 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:49.703 11:07:00 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.961 11:07:01 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:49.961 11:07:01 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.219 11:07:01 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:50.219 11:07:01 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:50.478 11:07:01 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.736 11:07:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.736 11:07:01 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.994 11:07:02 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.994 11:07:02 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.252 11:07:02 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.511 [2024-12-06 11:07:02.537858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.511 11:07:02 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:51.770 11:07:02 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:52.029 11:07:03 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.288 11:07:03 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:52.288 11:07:03 -- common/autotest_common.sh@1187 -- # local i=0 00:10:52.288 11:07:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.288 11:07:03 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:52.288 11:07:03 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:52.288 11:07:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:54.191 11:07:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:54.191 11:07:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:54.191 11:07:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.191 11:07:05 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:54.191 11:07:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.191 11:07:05 -- common/autotest_common.sh@1197 -- # return 0 00:10:54.191 11:07:05 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.191 [global] 00:10:54.191 thread=1 00:10:54.191 invalidate=1 00:10:54.191 rw=write 00:10:54.191 time_based=1 00:10:54.191 runtime=1 00:10:54.191 ioengine=libaio 00:10:54.191 direct=1 00:10:54.191 bs=4096 00:10:54.191 iodepth=1 00:10:54.191 norandommap=0 00:10:54.191 numjobs=1 00:10:54.191 00:10:54.191 verify_dump=1 00:10:54.191 verify_backlog=512 00:10:54.191 verify_state_save=0 00:10:54.191 do_verify=1 00:10:54.191 verify=crc32c-intel 00:10:54.191 [job0] 00:10:54.191 filename=/dev/nvme0n1 00:10:54.191 [job1] 00:10:54.191 filename=/dev/nvme0n2 00:10:54.191 [job2] 00:10:54.191 filename=/dev/nvme0n3 00:10:54.191 [job3] 00:10:54.191 filename=/dev/nvme0n4 00:10:54.191 Could not set queue depth (nvme0n1) 00:10:54.191 Could not set queue depth (nvme0n2) 00:10:54.191 Could not set queue depth (nvme0n3) 00:10:54.191 Could not set queue depth (nvme0n4) 00:10:54.448 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.448 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.448 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.448 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.448 fio-3.35 00:10:54.448 Starting 4 threads 00:10:55.821 00:10:55.821 job0: (groupid=0, jobs=1): err= 0: pid=75600: Fri Dec 6 11:07:06 2024 00:10:55.821 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 
00:10:55.821 slat (nsec): min=10869, max=52100, avg=14218.21, stdev=3445.58 00:10:55.821 clat (usec): min=129, max=690, avg=165.42, stdev=20.05 00:10:55.821 lat (usec): min=141, max=703, avg=179.64, stdev=20.45 00:10:55.821 clat percentiles (usec): 00:10:55.821 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:55.821 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 169], 00:10:55.821 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:10:55.821 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 243], 99.95th=[ 482], 00:10:55.821 | 99.99th=[ 693] 00:10:55.821 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:55.821 slat (nsec): min=14783, max=75153, avg=22098.59, stdev=5235.61 00:10:55.821 clat (usec): min=88, max=235, avg=123.07, stdev=13.75 00:10:55.821 lat (usec): min=107, max=311, avg=145.17, stdev=14.53 00:10:55.821 clat percentiles (usec): 00:10:55.821 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 113], 00:10:55.821 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 126], 00:10:55.821 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 149], 00:10:55.821 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 204], 00:10:55.821 | 99.99th=[ 237] 00:10:55.821 bw ( KiB/s): min=12288, max=12288, per=40.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:55.821 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:55.821 lat (usec) : 100=1.38%, 250=98.57%, 500=0.03%, 750=0.02% 00:10:55.821 cpu : usr=2.10%, sys=8.90%, ctx=6099, majf=0, minf=11 00:10:55.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.821 issued rwts: total=3027,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.821 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.821 job1: (groupid=0, jobs=1): err= 0: pid=75601: Fri Dec 6 11:07:06 2024 00:10:55.821 read: IOPS=1460, BW=5842KiB/s (5982kB/s)(5848KiB/1001msec) 00:10:55.821 slat (nsec): min=13931, max=56511, avg=18854.78, stdev=4436.14 00:10:55.821 clat (usec): min=242, max=517, avg=335.26, stdev=26.53 00:10:55.821 lat (usec): min=257, max=542, avg=354.11, stdev=27.57 00:10:55.821 clat percentiles (usec): 00:10:55.821 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:10:55.821 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:10:55.821 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 371], 00:10:55.821 | 99.00th=[ 412], 99.50th=[ 469], 99.90th=[ 515], 99.95th=[ 519], 00:10:55.821 | 99.99th=[ 519] 00:10:55.821 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:55.821 slat (nsec): min=13454, max=86676, avg=24963.02, stdev=7236.99 00:10:55.821 clat (usec): min=157, max=679, avg=284.79, stdev=43.28 00:10:55.821 lat (usec): min=183, max=716, avg=309.76, stdev=44.42 00:10:55.822 clat percentiles (usec): 00:10:55.822 | 1.00th=[ 184], 5.00th=[ 243], 10.00th=[ 258], 20.00th=[ 265], 00:10:55.822 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:55.822 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 338], 00:10:55.822 | 99.00th=[ 469], 99.50th=[ 519], 99.90th=[ 660], 99.95th=[ 685], 00:10:55.822 | 99.99th=[ 685] 00:10:55.822 bw ( KiB/s): min= 8040, max= 8040, per=26.20%, avg=8040.00, stdev= 0.00, samples=1 00:10:55.822 iops : min= 2010, max= 
2010, avg=2010.00, stdev= 0.00, samples=1 00:10:55.822 lat (usec) : 250=3.07%, 500=96.46%, 750=0.47% 00:10:55.822 cpu : usr=1.10%, sys=6.00%, ctx=2999, majf=0, minf=5 00:10:55.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 issued rwts: total=1462,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.822 job2: (groupid=0, jobs=1): err= 0: pid=75602: Fri Dec 6 11:07:06 2024 00:10:55.822 read: IOPS=1461, BW=5846KiB/s (5986kB/s)(5852KiB/1001msec) 00:10:55.822 slat (nsec): min=9913, max=43362, avg=13260.16, stdev=3585.91 00:10:55.822 clat (usec): min=166, max=536, avg=341.23, stdev=29.50 00:10:55.822 lat (usec): min=202, max=547, avg=354.49, stdev=28.97 00:10:55.822 clat percentiles (usec): 00:10:55.822 | 1.00th=[ 262], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 322], 00:10:55.822 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:10:55.822 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379], 00:10:55.822 | 99.00th=[ 416], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 537], 00:10:55.822 | 99.99th=[ 537] 00:10:55.822 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:55.822 slat (nsec): min=13416, max=83237, avg=27093.07, stdev=7957.37 00:10:55.822 clat (usec): min=162, max=693, avg=282.57, stdev=42.18 00:10:55.822 lat (usec): min=182, max=713, avg=309.66, stdev=44.14 00:10:55.822 clat percentiles (usec): 00:10:55.822 | 1.00th=[ 182], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 265], 00:10:55.822 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:10:55.822 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 338], 00:10:55.822 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[ 644], 99.95th=[ 693], 00:10:55.822 | 99.99th=[ 693] 00:10:55.822 bw ( KiB/s): min= 8040, max= 8040, per=26.20%, avg=8040.00, stdev= 0.00, samples=1 00:10:55.822 iops : min= 2010, max= 2010, avg=2010.00, stdev= 0.00, samples=1 00:10:55.822 lat (usec) : 250=3.53%, 500=96.10%, 750=0.37% 00:10:55.822 cpu : usr=1.90%, sys=4.60%, ctx=2999, majf=0, minf=9 00:10:55.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 issued rwts: total=1463,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.822 job3: (groupid=0, jobs=1): err= 0: pid=75603: Fri Dec 6 11:07:06 2024 00:10:55.822 read: IOPS=1370, BW=5483KiB/s (5614kB/s)(5488KiB/1001msec) 00:10:55.822 slat (nsec): min=18688, max=80770, avg=25968.24, stdev=6880.39 00:10:55.822 clat (usec): min=267, max=699, avg=354.48, stdev=58.53 00:10:55.822 lat (usec): min=304, max=730, avg=380.45, stdev=60.94 00:10:55.822 clat percentiles (usec): 00:10:55.822 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:10:55.822 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 343], 00:10:55.822 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 449], 95.00th=[ 486], 00:10:55.822 | 99.00th=[ 545], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 701], 00:10:55.822 | 99.99th=[ 701] 00:10:55.822 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:55.822 
slat (nsec): min=24913, max=91087, avg=36860.12, stdev=8989.09 00:10:55.822 clat (usec): min=113, max=4153, avg=268.38, stdev=134.31 00:10:55.822 lat (usec): min=142, max=4187, avg=305.24, stdev=134.14 00:10:55.822 clat percentiles (usec): 00:10:55.822 | 1.00th=[ 128], 5.00th=[ 206], 10.00th=[ 235], 20.00th=[ 249], 00:10:55.822 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:10:55.822 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:10:55.822 | 99.00th=[ 400], 99.50th=[ 494], 99.90th=[ 3261], 99.95th=[ 4146], 00:10:55.822 | 99.99th=[ 4146] 00:10:55.822 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:55.822 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:55.822 lat (usec) : 250=11.49%, 500=86.73%, 750=1.69% 00:10:55.822 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03% 00:10:55.822 cpu : usr=1.60%, sys=7.70%, ctx=2908, majf=0, minf=11 00:10:55.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.822 issued rwts: total=1372,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.822 00:10:55.822 Run status group 0 (all jobs): 00:10:55.822 READ: bw=28.6MiB/s (30.0MB/s), 5483KiB/s-11.8MiB/s (5614kB/s-12.4MB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:10:55.822 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:10:55.822 00:10:55.822 Disk stats (read/write): 00:10:55.822 nvme0n1: ios=2610/2668, merge=0/0, ticks=470/338, in_queue=808, util=87.37% 00:10:55.822 nvme0n2: ios=1090/1536, merge=0/0, ticks=379/410, in_queue=789, util=87.94% 00:10:55.822 nvme0n3: ios=1045/1536, merge=0/0, ticks=349/433, in_queue=782, util=89.19% 00:10:55.822 nvme0n4: ios=1045/1536, merge=0/0, ticks=362/430, in_queue=792, util=89.75% 00:10:55.822 11:07:06 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:55.822 [global] 00:10:55.822 thread=1 00:10:55.822 invalidate=1 00:10:55.822 rw=randwrite 00:10:55.822 time_based=1 00:10:55.822 runtime=1 00:10:55.822 ioengine=libaio 00:10:55.822 direct=1 00:10:55.822 bs=4096 00:10:55.822 iodepth=1 00:10:55.822 norandommap=0 00:10:55.822 numjobs=1 00:10:55.822 00:10:55.822 verify_dump=1 00:10:55.822 verify_backlog=512 00:10:55.822 verify_state_save=0 00:10:55.822 do_verify=1 00:10:55.822 verify=crc32c-intel 00:10:55.822 [job0] 00:10:55.822 filename=/dev/nvme0n1 00:10:55.822 [job1] 00:10:55.822 filename=/dev/nvme0n2 00:10:55.822 [job2] 00:10:55.822 filename=/dev/nvme0n3 00:10:55.822 [job3] 00:10:55.822 filename=/dev/nvme0n4 00:10:55.822 Could not set queue depth (nvme0n1) 00:10:55.822 Could not set queue depth (nvme0n2) 00:10:55.822 Could not set queue depth (nvme0n3) 00:10:55.822 Could not set queue depth (nvme0n4) 00:10:55.822 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.822 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.822 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.822 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:10:55.822 fio-3.35 00:10:55.822 Starting 4 threads 00:10:57.211 00:10:57.211 job0: (groupid=0, jobs=1): err= 0: pid=75656: Fri Dec 6 11:07:07 2024 00:10:57.211 read: IOPS=2046, BW=8188KiB/s (8384kB/s)(8196KiB/1001msec) 00:10:57.211 slat (nsec): min=8003, max=59858, avg=14667.85, stdev=5049.76 00:10:57.211 clat (usec): min=129, max=7202, avg=247.42, stdev=178.45 00:10:57.211 lat (usec): min=146, max=7216, avg=262.09, stdev=178.20 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 176], 00:10:57.211 | 30.00th=[ 206], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 258], 00:10:57.211 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 334], 00:10:57.211 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 1401], 99.95th=[ 2769], 00:10:57.211 | 99.99th=[ 7177] 00:10:57.211 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:57.211 slat (usec): min=11, max=258, avg=22.14, stdev= 9.36 00:10:57.211 clat (usec): min=99, max=986, avg=155.23, stdev=43.20 00:10:57.211 lat (usec): min=121, max=1021, avg=177.37, stdev=45.71 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 106], 5.00th=[ 114], 10.00th=[ 119], 20.00th=[ 126], 00:10:57.211 | 30.00th=[ 131], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 153], 00:10:57.211 | 70.00th=[ 161], 80.00th=[ 180], 90.00th=[ 208], 95.00th=[ 233], 00:10:57.211 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 400], 99.95th=[ 578], 00:10:57.211 | 99.99th=[ 988] 00:10:57.211 bw ( KiB/s): min=11153, max=11153, per=30.77%, avg=11153.00, stdev= 0.00, samples=1 00:10:57.211 iops : min= 2788, max= 2788, avg=2788.00, stdev= 0.00, samples=1 00:10:57.211 lat (usec) : 100=0.04%, 250=79.58%, 500=20.18%, 750=0.07%, 1000=0.02% 00:10:57.211 lat (msec) : 2=0.07%, 4=0.02%, 10=0.02% 00:10:57.211 cpu : usr=2.20%, sys=6.80%, ctx=4624, majf=0, minf=11 00:10:57.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.211 job1: (groupid=0, jobs=1): err= 0: pid=75657: Fri Dec 6 11:07:07 2024 00:10:57.211 read: IOPS=1947, BW=7788KiB/s (7975kB/s)(7796KiB/1001msec) 00:10:57.211 slat (nsec): min=7936, max=65415, avg=15067.61, stdev=6566.33 00:10:57.211 clat (usec): min=178, max=920, avg=276.78, stdev=59.40 00:10:57.211 lat (usec): min=197, max=933, avg=291.85, stdev=60.82 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 221], 00:10:57.211 | 30.00th=[ 235], 40.00th=[ 247], 50.00th=[ 273], 60.00th=[ 293], 00:10:57.211 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 383], 00:10:57.211 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 660], 99.95th=[ 922], 00:10:57.211 | 99.99th=[ 922] 00:10:57.211 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:57.211 slat (usec): min=11, max=359, avg=27.00, stdev=17.03 00:10:57.211 clat (usec): min=109, max=3020, avg=179.21, stdev=90.44 00:10:57.211 lat (usec): min=130, max=3068, avg=206.21, stdev=100.30 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 130], 00:10:57.211 | 30.00th=[ 137], 40.00th=[ 145], 50.00th=[ 155], 60.00th=[ 163], 00:10:57.211 | 70.00th=[ 188], 80.00th=[ 
212], 90.00th=[ 306], 95.00th=[ 334], 00:10:57.211 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 412], 99.95th=[ 429], 00:10:57.211 | 99.99th=[ 3032] 00:10:57.211 bw ( KiB/s): min= 8175, max= 8175, per=22.55%, avg=8175.00, stdev= 0.00, samples=1 00:10:57.211 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:57.211 lat (usec) : 250=64.90%, 500=34.98%, 750=0.08%, 1000=0.03% 00:10:57.211 lat (msec) : 4=0.03% 00:10:57.211 cpu : usr=1.70%, sys=7.10%, ctx=4011, majf=0, minf=17 00:10:57.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 issued rwts: total=1949,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.211 job2: (groupid=0, jobs=1): err= 0: pid=75658: Fri Dec 6 11:07:07 2024 00:10:57.211 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:57.211 slat (nsec): min=9712, max=55229, avg=14627.41, stdev=4423.91 00:10:57.211 clat (usec): min=136, max=1913, avg=222.27, stdev=54.14 00:10:57.211 lat (usec): min=151, max=1928, avg=236.90, stdev=53.89 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 184], 00:10:57.211 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:10:57.211 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:10:57.211 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 603], 99.95th=[ 619], 00:10:57.211 | 99.99th=[ 1909] 00:10:57.211 write: IOPS=2412, BW=9650KiB/s (9882kB/s)(9660KiB/1001msec); 0 zone resets 00:10:57.211 slat (nsec): min=10739, max=75902, avg=23329.35, stdev=7840.61 00:10:57.211 clat (usec): min=102, max=474, avg=186.28, stdev=49.41 00:10:57.211 lat (usec): min=129, max=493, avg=209.61, stdev=47.47 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 112], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 133], 00:10:57.211 | 30.00th=[ 141], 40.00th=[ 157], 50.00th=[ 200], 60.00th=[ 212], 00:10:57.211 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 255], 00:10:57.211 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 375], 99.95th=[ 424], 00:10:57.211 | 99.99th=[ 474] 00:10:57.211 bw ( KiB/s): min=11560, max=11560, per=31.89%, avg=11560.00, stdev= 0.00, samples=1 00:10:57.211 iops : min= 2890, max= 2890, avg=2890.00, stdev= 0.00, samples=1 00:10:57.211 lat (usec) : 250=87.74%, 500=12.19%, 750=0.04% 00:10:57.211 lat (msec) : 2=0.02% 00:10:57.211 cpu : usr=1.70%, sys=7.40%, ctx=4464, majf=0, minf=8 00:10:57.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 issued rwts: total=2048,2415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.211 job3: (groupid=0, jobs=1): err= 0: pid=75659: Fri Dec 6 11:07:07 2024 00:10:57.211 read: IOPS=1826, BW=7305KiB/s (7480kB/s)(7312KiB/1001msec) 00:10:57.211 slat (nsec): min=7966, max=62082, avg=15577.98, stdev=6687.39 00:10:57.211 clat (usec): min=163, max=6399, avg=270.23, stdev=174.74 00:10:57.211 lat (usec): min=176, max=6432, avg=285.81, stdev=177.09 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 
223], 00:10:57.211 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:10:57.211 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 506], 00:10:57.211 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 2376], 99.95th=[ 6390], 00:10:57.211 | 99.99th=[ 6390] 00:10:57.211 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:57.211 slat (nsec): min=12940, max=68710, avg=21002.27, stdev=5977.01 00:10:57.211 clat (usec): min=105, max=543, avg=208.52, stdev=37.51 00:10:57.211 lat (usec): min=124, max=562, avg=229.53, stdev=36.96 00:10:57.211 clat percentiles (usec): 00:10:57.211 | 1.00th=[ 118], 5.00th=[ 130], 10.00th=[ 145], 20.00th=[ 188], 00:10:57.211 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:10:57.211 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 253], 00:10:57.211 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 449], 99.95th=[ 474], 00:10:57.211 | 99.99th=[ 545] 00:10:57.211 bw ( KiB/s): min= 8192, max= 8192, per=22.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:57.211 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:57.211 lat (usec) : 250=76.63%, 500=20.90%, 750=2.35%, 1000=0.05% 00:10:57.211 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03% 00:10:57.211 cpu : usr=1.70%, sys=6.10%, ctx=3879, majf=0, minf=11 00:10:57.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.211 issued rwts: total=1828,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.211 00:10:57.211 Run status group 0 (all jobs): 00:10:57.211 READ: bw=30.7MiB/s (32.2MB/s), 7305KiB/s-8188KiB/s (7480kB/s-8384kB/s), io=30.8MiB (32.3MB), run=1001-1001msec 00:10:57.211 WRITE: bw=35.4MiB/s (37.1MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=35.4MiB (37.2MB), run=1001-1001msec 00:10:57.211 00:10:57.211 Disk stats (read/write): 00:10:57.211 nvme0n1: ios=1912/2048, merge=0/0, ticks=470/321, in_queue=791, util=86.97% 00:10:57.211 nvme0n2: ios=1585/1952, merge=0/0, ticks=450/353, in_queue=803, util=88.19% 00:10:57.211 nvme0n3: ios=1860/2048, merge=0/0, ticks=416/374, in_queue=790, util=89.25% 00:10:57.211 nvme0n4: ios=1536/1783, merge=0/0, ticks=425/373, in_queue=798, util=89.19% 00:10:57.211 11:07:07 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:57.211 [global] 00:10:57.211 thread=1 00:10:57.211 invalidate=1 00:10:57.211 rw=write 00:10:57.211 time_based=1 00:10:57.211 runtime=1 00:10:57.211 ioengine=libaio 00:10:57.211 direct=1 00:10:57.211 bs=4096 00:10:57.211 iodepth=128 00:10:57.211 norandommap=0 00:10:57.211 numjobs=1 00:10:57.211 00:10:57.211 verify_dump=1 00:10:57.211 verify_backlog=512 00:10:57.211 verify_state_save=0 00:10:57.212 do_verify=1 00:10:57.212 verify=crc32c-intel 00:10:57.212 [job0] 00:10:57.212 filename=/dev/nvme0n1 00:10:57.212 [job1] 00:10:57.212 filename=/dev/nvme0n2 00:10:57.212 [job2] 00:10:57.212 filename=/dev/nvme0n3 00:10:57.212 [job3] 00:10:57.212 filename=/dev/nvme0n4 00:10:57.212 Could not set queue depth (nvme0n1) 00:10:57.212 Could not set queue depth (nvme0n2) 00:10:57.212 Could not set queue depth (nvme0n3) 00:10:57.212 Could not set queue depth (nvme0n4) 00:10:57.212 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:57.212 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.212 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.212 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.212 fio-3.35 00:10:57.212 Starting 4 threads 00:10:58.600 00:10:58.600 job0: (groupid=0, jobs=1): err= 0: pid=75718: Fri Dec 6 11:07:09 2024 00:10:58.600 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:58.600 slat (usec): min=3, max=2668, avg=81.02, stdev=377.69 00:10:58.600 clat (usec): min=8081, max=12223, avg=10946.22, stdev=543.24 00:10:58.600 lat (usec): min=10120, max=12235, avg=11027.24, stdev=390.61 00:10:58.600 clat percentiles (usec): 00:10:58.600 | 1.00th=[ 8586], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:10:58.600 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:10:58.600 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:10:58.600 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:10:58.600 | 99.99th=[12256] 00:10:58.600 write: IOPS=5927, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1001msec); 0 zone resets 00:10:58.600 slat (usec): min=9, max=2577, avg=84.40, stdev=352.59 00:10:58.600 clat (usec): min=948, max=12018, avg=10933.28, stdev=969.31 00:10:58.600 lat (usec): min=966, max=12121, avg=11017.68, stdev=906.31 00:10:58.600 clat percentiles (usec): 00:10:58.601 | 1.00th=[ 6259], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10683], 00:10:58.601 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:58.601 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:10:58.601 | 99.00th=[11994], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:10:58.601 | 99.99th=[11994] 00:10:58.601 bw ( KiB/s): min=21872, max=24576, per=34.57%, avg=23224.00, stdev=1912.02, samples=2 00:10:58.601 iops : min= 5468, max= 6144, avg=5806.00, stdev=478.00, samples=2 00:10:58.601 lat (usec) : 1000=0.03% 00:10:58.601 lat (msec) : 2=0.09%, 4=0.28%, 10=3.80%, 20=95.81% 00:10:58.601 cpu : usr=5.30%, sys=14.50%, ctx=364, majf=0, minf=1 00:10:58.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:58.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.601 issued rwts: total=5632,5933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.601 job1: (groupid=0, jobs=1): err= 0: pid=75719: Fri Dec 6 11:07:09 2024 00:10:58.601 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:10:58.601 slat (usec): min=3, max=7726, avg=175.36, stdev=804.65 00:10:58.601 clat (usec): min=16378, max=28890, avg=22813.62, stdev=1753.74 00:10:58.601 lat (usec): min=17554, max=29169, avg=22988.98, stdev=1645.12 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[17957], 5.00th=[19530], 10.00th=[20055], 20.00th=[21627], 00:10:58.601 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:10:58.601 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[25822], 00:10:58.601 | 99.00th=[26346], 99.50th=[26870], 99.90th=[28967], 99.95th=[28967], 00:10:58.601 | 99.99th=[28967] 00:10:58.601 write: IOPS=2944, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1007msec); 0 zone resets 00:10:58.601 slat (usec): min=10, 
max=6197, avg=178.05, stdev=819.86 00:10:58.601 clat (usec): min=5499, max=31252, avg=23079.74, stdev=2498.45 00:10:58.601 lat (usec): min=8647, max=31962, avg=23257.79, stdev=2390.55 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[11338], 5.00th=[18744], 10.00th=[20317], 20.00th=[22414], 00:10:58.601 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:10:58.601 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25822], 95.00th=[26608], 00:10:58.601 | 99.00th=[28967], 99.50th=[30278], 99.90th=[31327], 99.95th=[31327], 00:10:58.601 | 99.99th=[31327] 00:10:58.601 bw ( KiB/s): min=10408, max=12288, per=16.89%, avg=11348.00, stdev=1329.36, samples=2 00:10:58.601 iops : min= 2602, max= 3072, avg=2837.00, stdev=332.34, samples=2 00:10:58.601 lat (msec) : 10=0.29%, 20=8.25%, 50=91.46% 00:10:58.601 cpu : usr=2.68%, sys=8.45%, ctx=427, majf=0, minf=1 00:10:58.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:58.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.601 issued rwts: total=2560,2965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.601 job2: (groupid=0, jobs=1): err= 0: pid=75720: Fri Dec 6 11:07:09 2024 00:10:58.601 read: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:10:58.601 slat (usec): min=8, max=2938, avg=94.29, stdev=443.59 00:10:58.601 clat (usec): min=205, max=16908, avg=12551.41, stdev=1219.00 00:10:58.601 lat (usec): min=2784, max=16921, avg=12645.71, stdev=1132.70 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[ 6259], 5.00th=[10945], 10.00th=[12125], 20.00th=[12256], 00:10:58.601 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:10:58.601 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:10:58.601 | 99.00th=[16319], 99.50th=[16712], 99.90th=[16909], 99.95th=[16909], 00:10:58.601 | 99.99th=[16909] 00:10:58.601 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:58.601 slat (usec): min=10, max=5658, avg=97.60, stdev=424.08 00:10:58.601 clat (usec): min=9405, max=16636, avg=12728.70, stdev=720.04 00:10:58.601 lat (usec): min=10726, max=16926, avg=12826.30, stdev=593.81 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:58.601 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:58.601 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:10:58.601 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581], 00:10:58.601 | 99.99th=[16581] 00:10:58.601 bw ( KiB/s): min=20480, max=20480, per=30.48%, avg=20480.00, stdev= 0.00, samples=2 00:10:58.601 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:58.601 lat (usec) : 250=0.01% 00:10:58.601 lat (msec) : 4=0.32%, 10=1.37%, 20=98.30% 00:10:58.601 cpu : usr=5.19%, sys=13.07%, ctx=315, majf=0, minf=1 00:10:58.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.601 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.601 job3: (groupid=0, jobs=1): err= 0: 
pid=75721: Fri Dec 6 11:07:09 2024 00:10:58.601 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:58.601 slat (usec): min=4, max=6453, avg=176.68, stdev=794.56 00:10:58.601 clat (usec): min=16719, max=28438, avg=23426.74, stdev=1884.70 00:10:58.601 lat (usec): min=17804, max=28466, avg=23603.42, stdev=1774.38 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[17957], 5.00th=[20055], 10.00th=[21103], 20.00th=[22152], 00:10:58.601 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:10:58.601 | 70.00th=[23987], 80.00th=[24773], 90.00th=[25822], 95.00th=[26870], 00:10:58.601 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:10:58.601 | 99.99th=[28443] 00:10:58.601 write: IOPS=2880, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec); 0 zone resets 00:10:58.601 slat (usec): min=6, max=6432, avg=181.15, stdev=806.69 00:10:58.601 clat (usec): min=4485, max=28108, avg=22940.71, stdev=2596.19 00:10:58.601 lat (usec): min=5464, max=28811, avg=23121.86, stdev=2499.43 00:10:58.601 clat percentiles (usec): 00:10:58.601 | 1.00th=[ 8029], 5.00th=[19268], 10.00th=[21103], 20.00th=[22414], 00:10:58.601 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:10:58.601 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25297], 95.00th=[26608], 00:10:58.601 | 99.00th=[27132], 99.50th=[27919], 99.90th=[27919], 99.95th=[28181], 00:10:58.601 | 99.99th=[28181] 00:10:58.601 bw ( KiB/s): min= 9856, max=12288, per=16.48%, avg=11072.00, stdev=1719.68, samples=2 00:10:58.601 iops : min= 2464, max= 3072, avg=2768.00, stdev=429.92, samples=2 00:10:58.601 lat (msec) : 10=0.59%, 20=4.84%, 50=94.57% 00:10:58.601 cpu : usr=2.29%, sys=8.57%, ctx=417, majf=0, minf=8 00:10:58.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:58.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.601 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.601 00:10:58.601 Run status group 0 (all jobs): 00:10:58.601 READ: bw=60.7MiB/s (63.7MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.1MB), run=1001-1007msec 00:10:58.601 WRITE: bw=65.6MiB/s (68.8MB/s), 11.3MiB/s-23.2MiB/s (11.8MB/s-24.3MB/s), io=66.1MiB (69.3MB), run=1001-1007msec 00:10:58.601 00:10:58.601 Disk stats (read/write): 00:10:58.601 nvme0n1: ios=4818/5120, merge=0/0, ticks=11434/12113, in_queue=23547, util=87.07% 00:10:58.601 nvme0n2: ios=2197/2560, merge=0/0, ticks=11614/12346, in_queue=23960, util=87.35% 00:10:58.601 nvme0n3: ios=4096/4448, merge=0/0, ticks=11425/12268, in_queue=23693, util=89.18% 00:10:58.601 nvme0n4: ios=2110/2560, merge=0/0, ticks=11035/12113, in_queue=23148, util=89.12% 00:10:58.601 11:07:09 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:58.601 [global] 00:10:58.601 thread=1 00:10:58.601 invalidate=1 00:10:58.601 rw=randwrite 00:10:58.601 time_based=1 00:10:58.601 runtime=1 00:10:58.601 ioengine=libaio 00:10:58.601 direct=1 00:10:58.601 bs=4096 00:10:58.601 iodepth=128 00:10:58.601 norandommap=0 00:10:58.601 numjobs=1 00:10:58.601 00:10:58.601 verify_dump=1 00:10:58.601 verify_backlog=512 00:10:58.601 verify_state_save=0 00:10:58.601 do_verify=1 00:10:58.601 verify=crc32c-intel 00:10:58.601 [job0] 00:10:58.601 filename=/dev/nvme0n1 00:10:58.601 [job1] 00:10:58.601 
filename=/dev/nvme0n2 00:10:58.601 [job2] 00:10:58.601 filename=/dev/nvme0n3 00:10:58.601 [job3] 00:10:58.601 filename=/dev/nvme0n4 00:10:58.601 Could not set queue depth (nvme0n1) 00:10:58.601 Could not set queue depth (nvme0n2) 00:10:58.601 Could not set queue depth (nvme0n3) 00:10:58.601 Could not set queue depth (nvme0n4) 00:10:58.601 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.601 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.601 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.601 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.601 fio-3.35 00:10:58.601 Starting 4 threads 00:10:59.976 00:10:59.976 job0: (groupid=0, jobs=1): err= 0: pid=75781: Fri Dec 6 11:07:10 2024 00:10:59.976 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:59.976 slat (usec): min=3, max=7136, avg=103.26, stdev=498.28 00:10:59.976 clat (usec): min=8089, max=35281, avg=13764.57, stdev=5513.83 00:10:59.976 lat (usec): min=8115, max=35867, avg=13867.83, stdev=5559.86 00:10:59.976 clat percentiles (usec): 00:10:59.976 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:10:59.976 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:10:59.976 | 70.00th=[12518], 80.00th=[19792], 90.00th=[23200], 95.00th=[25822], 00:10:59.976 | 99.00th=[29754], 99.50th=[30278], 99.90th=[35390], 99.95th=[35390], 00:10:59.976 | 99.99th=[35390] 00:10:59.976 write: IOPS=5108, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:59.976 slat (usec): min=10, max=5922, avg=95.54, stdev=478.20 00:10:59.976 clat (usec): min=365, max=26281, avg=12356.23, stdev=3481.97 00:10:59.976 lat (usec): min=3267, max=26301, avg=12451.77, stdev=3519.04 00:10:59.976 clat percentiles (usec): 00:10:59.976 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10552], 00:10:59.976 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:10:59.976 | 70.00th=[11600], 80.00th=[13304], 90.00th=[17171], 95.00th=[21627], 00:10:59.976 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:10:59.976 | 99.99th=[26346] 00:10:59.976 bw ( KiB/s): min=15352, max=24576, per=29.77%, avg=19964.00, stdev=6522.35, samples=2 00:10:59.976 iops : min= 3838, max= 6144, avg=4991.00, stdev=1630.59, samples=2 00:10:59.976 lat (usec) : 500=0.01% 00:10:59.976 lat (msec) : 4=0.29%, 10=13.23%, 20=73.79%, 50=12.68% 00:10:59.976 cpu : usr=4.10%, sys=13.09%, ctx=474, majf=0, minf=12 00:10:59.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:59.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.976 issued rwts: total=4608,5119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.976 job1: (groupid=0, jobs=1): err= 0: pid=75782: Fri Dec 6 11:07:10 2024 00:10:59.976 read: IOPS=2126, BW=8506KiB/s (8711kB/s)(8532KiB/1003msec) 00:10:59.976 slat (usec): min=3, max=9853, avg=189.29, stdev=837.80 00:10:59.976 clat (usec): min=424, max=46455, avg=22599.00, stdev=4988.11 00:10:59.976 lat (usec): min=4653, max=46472, avg=22788.29, stdev=5037.38 00:10:59.976 clat percentiles (usec): 00:10:59.976 | 1.00th=[ 
9765], 5.00th=[16909], 10.00th=[17957], 20.00th=[19530], 00:10:59.976 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21365], 60.00th=[22938], 00:10:59.976 | 70.00th=[23987], 80.00th=[26084], 90.00th=[29754], 95.00th=[30016], 00:10:59.976 | 99.00th=[39060], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:10:59.976 | 99.99th=[46400] 00:10:59.976 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:59.976 slat (usec): min=5, max=6626, avg=225.13, stdev=856.40 00:10:59.976 clat (usec): min=13951, max=62711, avg=30528.63, stdev=12347.62 00:10:59.976 lat (usec): min=13971, max=62736, avg=30753.76, stdev=12424.59 00:10:59.976 clat percentiles (usec): 00:10:59.976 | 1.00th=[14615], 5.00th=[16319], 10.00th=[17957], 20.00th=[20579], 00:10:59.976 | 30.00th=[21627], 40.00th=[21890], 50.00th=[26346], 60.00th=[32637], 00:10:59.976 | 70.00th=[36439], 80.00th=[40633], 90.00th=[50070], 95.00th=[55837], 00:10:59.977 | 99.00th=[61604], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:10:59.977 | 99.99th=[62653] 00:10:59.977 bw ( KiB/s): min= 8656, max=11480, per=15.01%, avg=10068.00, stdev=1996.87, samples=2 00:10:59.977 iops : min= 2164, max= 2870, avg=2517.00, stdev=499.22, samples=2 00:10:59.977 lat (usec) : 500=0.02% 00:10:59.977 lat (msec) : 10=0.79%, 20=18.37%, 50=75.28%, 100=5.54% 00:10:59.977 cpu : usr=1.80%, sys=7.68%, ctx=413, majf=0, minf=13 00:10:59.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:59.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.977 issued rwts: total=2133,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.977 job2: (groupid=0, jobs=1): err= 0: pid=75783: Fri Dec 6 11:07:10 2024 00:10:59.977 read: IOPS=4878, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1004msec) 00:10:59.977 slat (usec): min=4, max=3172, avg=94.81, stdev=440.14 00:10:59.977 clat (usec): min=149, max=13662, avg=12511.24, stdev=1084.89 00:10:59.977 lat (usec): min=3322, max=15493, avg=12606.05, stdev=996.53 00:10:59.977 clat percentiles (usec): 00:10:59.977 | 1.00th=[ 6521], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:10:59.977 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:10:59.977 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:10:59.977 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13566], 99.95th=[13566], 00:10:59.977 | 99.99th=[13698] 00:10:59.977 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:59.977 slat (usec): min=9, max=2966, avg=97.16, stdev=405.91 00:10:59.977 clat (usec): min=9256, max=14596, avg=12775.13, stdev=608.48 00:10:59.977 lat (usec): min=10339, max=15609, avg=12872.30, stdev=480.36 00:10:59.977 clat percentiles (usec): 00:10:59.977 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:59.977 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:59.977 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13698], 00:10:59.977 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14484], 99.95th=[14484], 00:10:59.977 | 99.99th=[14615] 00:10:59.977 bw ( KiB/s): min=20480, max=20480, per=30.54%, avg=20480.00, stdev= 0.00, samples=2 00:10:59.977 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:59.977 lat (usec) : 250=0.01% 00:10:59.977 lat (msec) : 4=0.32%, 10=1.18%, 20=98.49% 00:10:59.977 cpu : 
usr=5.08%, sys=14.26%, ctx=331, majf=0, minf=14 00:10:59.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:59.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.977 issued rwts: total=4898,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.977 job3: (groupid=0, jobs=1): err= 0: pid=75784: Fri Dec 6 11:07:10 2024 00:10:59.977 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:59.977 slat (usec): min=7, max=6576, avg=130.67, stdev=681.50 00:10:59.977 clat (usec): min=9301, max=27787, avg=17118.92, stdev=4649.30 00:10:59.977 lat (usec): min=11259, max=27801, avg=17249.59, stdev=4641.25 00:10:59.977 clat percentiles (usec): 00:10:59.977 | 1.00th=[10421], 5.00th=[12387], 10.00th=[12518], 20.00th=[12649], 00:10:59.977 | 30.00th=[12911], 40.00th=[13435], 50.00th=[17695], 60.00th=[18220], 00:10:59.977 | 70.00th=[18744], 80.00th=[19530], 90.00th=[25297], 95.00th=[27132], 00:10:59.977 | 99.00th=[27395], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:10:59.977 | 99.99th=[27657] 00:10:59.977 write: IOPS=4020, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec); 0 zone resets 00:10:59.977 slat (usec): min=10, max=6685, avg=124.57, stdev=596.65 00:10:59.977 clat (usec): min=141, max=28902, avg=16012.31, stdev=4936.76 00:10:59.977 lat (usec): min=2934, max=28928, avg=16136.88, stdev=4934.34 00:10:59.977 clat percentiles (usec): 00:10:59.977 | 1.00th=[ 6128], 5.00th=[11469], 10.00th=[12125], 20.00th=[12518], 00:10:59.977 | 30.00th=[12780], 40.00th=[13173], 50.00th=[15139], 60.00th=[15664], 00:10:59.977 | 70.00th=[16909], 80.00th=[19530], 90.00th=[23462], 95.00th=[28181], 00:10:59.977 | 99.00th=[28705], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:10:59.977 | 99.99th=[28967] 00:10:59.977 bw ( KiB/s): min=12263, max=18952, per=23.27%, avg=15607.50, stdev=4729.84, samples=2 00:10:59.977 iops : min= 3065, max= 4738, avg=3901.50, stdev=1182.99, samples=2 00:10:59.977 lat (usec) : 250=0.01% 00:10:59.977 lat (msec) : 4=0.42%, 10=1.14%, 20=81.13%, 50=17.29% 00:10:59.977 cpu : usr=3.39%, sys=11.18%, ctx=244, majf=0, minf=11 00:10:59.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:59.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.977 issued rwts: total=3584,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.977 00:10:59.977 Run status group 0 (all jobs): 00:10:59.977 READ: bw=59.2MiB/s (62.1MB/s), 8506KiB/s-19.1MiB/s (8711kB/s-20.0MB/s), io=59.5MiB (62.4MB), run=1002-1004msec 00:10:59.977 WRITE: bw=65.5MiB/s (68.7MB/s), 9.97MiB/s-20.0MiB/s (10.5MB/s-20.9MB/s), io=65.8MiB (68.9MB), run=1002-1004msec 00:10:59.977 00:10:59.977 Disk stats (read/write): 00:10:59.977 nvme0n1: ios=4282/4608, merge=0/0, ticks=16654/15421, in_queue=32075, util=88.26% 00:10:59.977 nvme0n2: ios=1939/2048, merge=0/0, ticks=14486/20018, in_queue=34504, util=89.68% 00:10:59.977 nvme0n3: ios=4096/4512, merge=0/0, ticks=11481/12057, in_queue=23538, util=89.29% 00:10:59.977 nvme0n4: ios=3089/3168, merge=0/0, ticks=12811/11793, in_queue=24604, util=89.95% 00:10:59.977 11:07:10 -- target/fio.sh@55 -- # sync 00:10:59.977 11:07:10 -- target/fio.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:59.977 11:07:10 -- target/fio.sh@59 -- # fio_pid=75798 00:10:59.977 11:07:10 -- target/fio.sh@61 -- # sleep 3 00:10:59.977 [global] 00:10:59.977 thread=1 00:10:59.977 invalidate=1 00:10:59.977 rw=read 00:10:59.977 time_based=1 00:10:59.977 runtime=10 00:10:59.977 ioengine=libaio 00:10:59.977 direct=1 00:10:59.977 bs=4096 00:10:59.977 iodepth=1 00:10:59.977 norandommap=1 00:10:59.977 numjobs=1 00:10:59.977 00:10:59.977 [job0] 00:10:59.977 filename=/dev/nvme0n1 00:10:59.977 [job1] 00:10:59.977 filename=/dev/nvme0n2 00:10:59.977 [job2] 00:10:59.977 filename=/dev/nvme0n3 00:10:59.977 [job3] 00:10:59.977 filename=/dev/nvme0n4 00:10:59.977 Could not set queue depth (nvme0n1) 00:10:59.977 Could not set queue depth (nvme0n2) 00:10:59.977 Could not set queue depth (nvme0n3) 00:10:59.977 Could not set queue depth (nvme0n4) 00:10:59.977 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.977 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.977 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.977 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.977 fio-3.35 00:10:59.977 Starting 4 threads 00:11:03.257 11:07:13 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:03.257 fio: pid=75841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.257 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=62992384, buflen=4096 00:11:03.257 11:07:14 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:03.257 fio: pid=75840, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.257 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68091904, buflen=4096 00:11:03.257 11:07:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.257 11:07:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:03.515 fio: pid=75838, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.515 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55844864, buflen=4096 00:11:03.515 11:07:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.515 11:07:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:03.774 fio: pid=75839, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.774 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61059072, buflen=4096 00:11:03.774 00:11:03.774 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75838: Fri Dec 6 11:07:14 2024 00:11:03.774 read: IOPS=3911, BW=15.3MiB/s (16.0MB/s)(53.3MiB/3486msec) 00:11:03.774 slat (usec): min=10, max=15861, avg=16.55, stdev=204.63 00:11:03.774 clat (usec): min=122, max=3104, avg=237.67, stdev=47.83 00:11:03.774 lat (usec): min=134, max=16016, avg=254.22, stdev=210.23 00:11:03.774 clat percentiles (usec): 00:11:03.774 | 1.00th=[ 151], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 223], 00:11:03.774 
| 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:11:03.774 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:11:03.774 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 529], 99.95th=[ 1139], 00:11:03.774 | 99.99th=[ 2540] 00:11:03.774 bw ( KiB/s): min=15464, max=15904, per=24.12%, avg=15610.67, stdev=153.52, samples=6 00:11:03.774 iops : min= 3866, max= 3976, avg=3902.67, stdev=38.38, samples=6 00:11:03.774 lat (usec) : 250=74.88%, 500=25.00%, 750=0.05% 00:11:03.774 lat (msec) : 2=0.04%, 4=0.02% 00:11:03.774 cpu : usr=1.12%, sys=4.65%, ctx=13640, majf=0, minf=1 00:11:03.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 issued rwts: total=13635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.774 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75839: Fri Dec 6 11:07:14 2024 00:11:03.774 read: IOPS=3983, BW=15.6MiB/s (16.3MB/s)(58.2MiB/3742msec) 00:11:03.774 slat (usec): min=11, max=13171, avg=18.48, stdev=208.88 00:11:03.774 clat (usec): min=48, max=7383, avg=230.98, stdev=93.34 00:11:03.774 lat (usec): min=127, max=13413, avg=249.46, stdev=229.48 00:11:03.774 clat percentiles (usec): 00:11:03.774 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 167], 20.00th=[ 217], 00:11:03.774 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:11:03.774 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:11:03.774 | 99.00th=[ 310], 99.50th=[ 367], 99.90th=[ 1172], 99.95th=[ 1778], 00:11:03.774 | 99.99th=[ 4424] 00:11:03.774 bw ( KiB/s): min=14928, max=16784, per=24.17%, avg=15641.14, stdev=657.40, samples=7 00:11:03.774 iops : min= 3732, max= 4196, avg=3910.29, stdev=164.35, samples=7 00:11:03.774 lat (usec) : 50=0.01%, 100=0.01%, 250=78.80%, 500=20.97%, 750=0.03% 00:11:03.774 lat (usec) : 1000=0.06% 00:11:03.774 lat (msec) : 2=0.08%, 4=0.03%, 10=0.01% 00:11:03.774 cpu : usr=1.15%, sys=4.87%, ctx=14930, majf=0, minf=1 00:11:03.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 issued rwts: total=14908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.774 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75840: Fri Dec 6 11:07:14 2024 00:11:03.774 read: IOPS=5198, BW=20.3MiB/s (21.3MB/s)(64.9MiB/3198msec) 00:11:03.774 slat (usec): min=7, max=7827, avg=14.68, stdev=82.22 00:11:03.774 clat (usec): min=130, max=7983, avg=176.11, stdev=75.02 00:11:03.774 lat (usec): min=144, max=8010, avg=190.79, stdev=111.14 00:11:03.774 clat percentiles (usec): 00:11:03.774 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:03.774 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:11:03.774 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 210], 95.00th=[ 229], 00:11:03.774 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 529], 99.95th=[ 1057], 00:11:03.774 | 99.99th=[ 3195] 00:11:03.774 bw ( KiB/s): min=17064, max=21784, per=32.04%, avg=20733.33, stdev=1817.64, samples=6 
00:11:03.774 iops : min= 4266, max= 5446, avg=5183.33, stdev=454.41, samples=6 00:11:03.774 lat (usec) : 250=99.08%, 500=0.80%, 750=0.06% 00:11:03.774 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:11:03.774 cpu : usr=1.50%, sys=6.63%, ctx=16635, majf=0, minf=1 00:11:03.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 issued rwts: total=16625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.774 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75841: Fri Dec 6 11:07:14 2024 00:11:03.774 read: IOPS=5229, BW=20.4MiB/s (21.4MB/s)(60.1MiB/2941msec) 00:11:03.774 slat (usec): min=10, max=142, avg=14.28, stdev= 4.61 00:11:03.774 clat (usec): min=133, max=716, avg=175.32, stdev=22.68 00:11:03.774 lat (usec): min=145, max=728, avg=189.59, stdev=23.37 00:11:03.774 clat percentiles (usec): 00:11:03.774 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:11:03.774 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:03.774 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 212], 95.00th=[ 225], 00:11:03.774 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 289], 00:11:03.774 | 99.99th=[ 701] 00:11:03.774 bw ( KiB/s): min=20856, max=22048, per=33.34%, avg=21576.00, stdev=494.55, samples=5 00:11:03.774 iops : min= 5214, max= 5512, avg=5394.00, stdev=123.64, samples=5 00:11:03.774 lat (usec) : 250=99.58%, 500=0.40%, 750=0.01% 00:11:03.774 cpu : usr=1.60%, sys=6.87%, ctx=15380, majf=0, minf=2 00:11:03.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.774 issued rwts: total=15380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.774 00:11:03.774 Run status group 0 (all jobs): 00:11:03.774 READ: bw=63.2MiB/s (66.3MB/s), 15.3MiB/s-20.4MiB/s (16.0MB/s-21.4MB/s), io=237MiB (248MB), run=2941-3742msec 00:11:03.774 00:11:03.774 Disk stats (read/write): 00:11:03.774 nvme0n1: ios=13142/0, merge=0/0, ticks=3154/0, in_queue=3154, util=95.08% 00:11:03.774 nvme0n2: ios=14156/0, merge=0/0, ticks=3355/0, in_queue=3355, util=95.02% 00:11:03.774 nvme0n3: ios=16180/0, merge=0/0, ticks=2859/0, in_queue=2859, util=96.30% 00:11:03.774 nvme0n4: ios=15085/0, merge=0/0, ticks=2667/0, in_queue=2667, util=96.79% 00:11:03.774 11:07:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.774 11:07:14 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:04.032 11:07:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.032 11:07:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:04.599 11:07:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.599 11:07:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:04.599 11:07:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.599 11:07:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:04.858 11:07:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.858 11:07:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:05.117 11:07:16 -- target/fio.sh@69 -- # fio_status=0 00:11:05.117 11:07:16 -- target/fio.sh@70 -- # wait 75798 00:11:05.117 11:07:16 -- target/fio.sh@70 -- # fio_status=4 00:11:05.117 11:07:16 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.117 11:07:16 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.117 11:07:16 -- common/autotest_common.sh@1208 -- # local i=0 00:11:05.117 11:07:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:05.117 11:07:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.117 11:07:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:05.117 11:07:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.117 nvmf hotplug test: fio failed as expected 00:11:05.117 11:07:16 -- common/autotest_common.sh@1220 -- # return 0 00:11:05.117 11:07:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:05.117 11:07:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:05.117 11:07:16 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.685 11:07:16 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:05.685 11:07:16 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:05.685 11:07:16 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:05.685 11:07:16 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:05.685 11:07:16 -- target/fio.sh@91 -- # nvmftestfini 00:11:05.685 11:07:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:05.685 11:07:16 -- nvmf/common.sh@116 -- # sync 00:11:05.685 11:07:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:05.685 11:07:16 -- nvmf/common.sh@119 -- # set +e 00:11:05.685 11:07:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:05.685 11:07:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:05.685 rmmod nvme_tcp 00:11:05.685 rmmod nvme_fabrics 00:11:05.685 rmmod nvme_keyring 00:11:05.685 11:07:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:05.685 11:07:16 -- nvmf/common.sh@123 -- # set -e 00:11:05.685 11:07:16 -- nvmf/common.sh@124 -- # return 0 00:11:05.685 11:07:16 -- nvmf/common.sh@477 -- # '[' -n 75410 ']' 00:11:05.685 11:07:16 -- nvmf/common.sh@478 -- # killprocess 75410 00:11:05.685 11:07:16 -- common/autotest_common.sh@936 -- # '[' -z 75410 ']' 00:11:05.685 11:07:16 -- common/autotest_common.sh@940 -- # kill -0 75410 00:11:05.685 11:07:16 -- common/autotest_common.sh@941 -- # uname 00:11:05.685 11:07:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.685 11:07:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75410 00:11:05.685 killing process with pid 75410 00:11:05.685 11:07:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:05.685 11:07:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:05.685 11:07:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75410' 
00:11:05.685 11:07:16 -- common/autotest_common.sh@955 -- # kill 75410 00:11:05.685 11:07:16 -- common/autotest_common.sh@960 -- # wait 75410 00:11:05.685 11:07:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:05.685 11:07:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:05.685 11:07:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:05.685 11:07:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.685 11:07:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:05.685 11:07:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.685 11:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.685 11:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.945 11:07:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:05.945 00:11:05.945 real 0m19.608s 00:11:05.945 user 1m13.997s 00:11:05.945 sys 0m10.406s 00:11:05.945 11:07:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.945 11:07:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.945 ************************************ 00:11:05.945 END TEST nvmf_fio_target 00:11:05.945 ************************************ 00:11:05.946 11:07:16 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:05.946 11:07:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:05.946 11:07:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.946 11:07:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.946 ************************************ 00:11:05.946 START TEST nvmf_bdevio 00:11:05.946 ************************************ 00:11:05.946 11:07:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:05.946 * Looking for test storage... 00:11:05.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:05.946 11:07:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.946 11:07:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.946 11:07:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.946 11:07:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.946 11:07:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.946 11:07:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.946 11:07:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.946 11:07:17 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.946 11:07:17 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.946 11:07:17 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.946 11:07:17 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.946 11:07:17 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.946 11:07:17 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.946 11:07:17 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.946 11:07:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.946 11:07:17 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.946 11:07:17 -- scripts/common.sh@344 -- # : 1 00:11:05.946 11:07:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.946 11:07:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.946 11:07:17 -- scripts/common.sh@364 -- # decimal 1 00:11:05.946 11:07:17 -- scripts/common.sh@352 -- # local d=1 00:11:05.946 11:07:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.946 11:07:17 -- scripts/common.sh@354 -- # echo 1 00:11:05.946 11:07:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.946 11:07:17 -- scripts/common.sh@365 -- # decimal 2 00:11:05.946 11:07:17 -- scripts/common.sh@352 -- # local d=2 00:11:05.946 11:07:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.946 11:07:17 -- scripts/common.sh@354 -- # echo 2 00:11:05.946 11:07:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.946 11:07:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.946 11:07:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.946 11:07:17 -- scripts/common.sh@367 -- # return 0 00:11:05.946 11:07:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.946 11:07:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.946 --rc genhtml_branch_coverage=1 00:11:05.946 --rc genhtml_function_coverage=1 00:11:05.946 --rc genhtml_legend=1 00:11:05.946 --rc geninfo_all_blocks=1 00:11:05.946 --rc geninfo_unexecuted_blocks=1 00:11:05.946 00:11:05.946 ' 00:11:05.946 11:07:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.946 --rc genhtml_branch_coverage=1 00:11:05.946 --rc genhtml_function_coverage=1 00:11:05.946 --rc genhtml_legend=1 00:11:05.946 --rc geninfo_all_blocks=1 00:11:05.946 --rc geninfo_unexecuted_blocks=1 00:11:05.946 00:11:05.946 ' 00:11:05.946 11:07:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.946 --rc genhtml_branch_coverage=1 00:11:05.946 --rc genhtml_function_coverage=1 00:11:05.946 --rc genhtml_legend=1 00:11:05.946 --rc geninfo_all_blocks=1 00:11:05.946 --rc geninfo_unexecuted_blocks=1 00:11:05.946 00:11:05.946 ' 00:11:05.946 11:07:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.946 --rc genhtml_branch_coverage=1 00:11:05.946 --rc genhtml_function_coverage=1 00:11:05.946 --rc genhtml_legend=1 00:11:05.946 --rc geninfo_all_blocks=1 00:11:05.946 --rc geninfo_unexecuted_blocks=1 00:11:05.946 00:11:05.946 ' 00:11:05.946 11:07:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.946 11:07:17 -- nvmf/common.sh@7 -- # uname -s 00:11:05.946 11:07:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.946 11:07:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.946 11:07:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.946 11:07:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.946 11:07:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.946 11:07:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.946 11:07:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.946 11:07:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.946 11:07:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.946 11:07:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.946 11:07:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:05.946 
11:07:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:05.946 11:07:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.946 11:07:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.946 11:07:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.946 11:07:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.946 11:07:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.946 11:07:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.946 11:07:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.946 11:07:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.946 11:07:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.946 11:07:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.946 11:07:17 -- paths/export.sh@5 -- # export PATH 00:11:05.946 11:07:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.946 11:07:17 -- nvmf/common.sh@46 -- # : 0 00:11:05.946 11:07:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:05.946 11:07:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:05.946 11:07:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:05.946 11:07:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.946 11:07:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.946 11:07:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
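The common.sh values set above (NVMF_PORT=4420, NVMF_SERIAL=SPDKISFASTANDAWESOME, and the NVME_HOSTNQN/NVME_HOSTID pair derived from nvme gen-hostnqn) are what the initiator-side helpers hand to nvme-cli. No connect command appears in this excerpt, and bdevio attaches through a JSON config further down rather than through the kernel initiator, so the lines below are only an illustrative sketch of how those values are normally combined in the target tests that do use nvme connect, against the 10.0.0.2:4420 listener this test creates later.

    # Illustrative sketch only -- not copied from this trace. Typical nvme-cli
    # use of the NVME_HOST values defined above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME   # waitforserial-style check, as seen earlier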
00:11:05.946 11:07:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:05.946 11:07:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:05.946 11:07:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.946 11:07:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.946 11:07:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:05.946 11:07:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:05.946 11:07:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.946 11:07:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:05.946 11:07:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:05.946 11:07:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:06.206 11:07:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.206 11:07:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.206 11:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.206 11:07:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:06.206 11:07:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:06.206 11:07:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:06.206 11:07:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:06.206 11:07:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:06.206 11:07:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:06.206 11:07:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.206 11:07:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.206 11:07:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:06.206 11:07:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:06.206 11:07:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.206 11:07:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.206 11:07:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.206 11:07:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.206 11:07:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.206 11:07:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.206 11:07:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.206 11:07:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.206 11:07:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:06.206 11:07:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:06.206 Cannot find device "nvmf_tgt_br" 00:11:06.206 11:07:17 -- nvmf/common.sh@154 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.206 Cannot find device "nvmf_tgt_br2" 00:11:06.206 11:07:17 -- nvmf/common.sh@155 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:06.206 11:07:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:06.206 Cannot find device "nvmf_tgt_br" 00:11:06.206 11:07:17 -- nvmf/common.sh@157 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:06.206 Cannot find device "nvmf_tgt_br2" 00:11:06.206 11:07:17 -- nvmf/common.sh@158 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:06.206 11:07:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:06.206 11:07:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.206 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:06.206 11:07:17 -- nvmf/common.sh@161 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.206 11:07:17 -- nvmf/common.sh@162 -- # true 00:11:06.206 11:07:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.206 11:07:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.206 11:07:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.206 11:07:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.206 11:07:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:06.206 11:07:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:06.206 11:07:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:06.206 11:07:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:06.206 11:07:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:06.206 11:07:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:06.206 11:07:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:06.206 11:07:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:06.206 11:07:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:06.206 11:07:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:06.206 11:07:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:06.465 11:07:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.465 11:07:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:06.465 11:07:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:06.465 11:07:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.465 11:07:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:06.465 11:07:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.465 11:07:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.465 11:07:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.465 11:07:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:06.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:11:06.465 00:11:06.465 --- 10.0.0.2 ping statistics --- 00:11:06.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.465 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:06.465 11:07:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:06.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:06.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:06.465 00:11:06.465 --- 10.0.0.3 ping statistics --- 00:11:06.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.465 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:06.465 11:07:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:06.465 00:11:06.465 --- 10.0.0.1 ping statistics --- 00:11:06.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.465 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:06.465 11:07:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.465 11:07:17 -- nvmf/common.sh@421 -- # return 0 00:11:06.465 11:07:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:06.465 11:07:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.465 11:07:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:06.465 11:07:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:06.465 11:07:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.465 11:07:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:06.465 11:07:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:06.465 11:07:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:06.465 11:07:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:06.465 11:07:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.465 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.465 11:07:17 -- nvmf/common.sh@469 -- # nvmfpid=76117 00:11:06.465 11:07:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:06.465 11:07:17 -- nvmf/common.sh@470 -- # waitforlisten 76117 00:11:06.465 11:07:17 -- common/autotest_common.sh@829 -- # '[' -z 76117 ']' 00:11:06.465 11:07:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.465 11:07:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.465 11:07:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.465 11:07:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.465 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.465 [2024-12-06 11:07:17.503509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:06.465 [2024-12-06 11:07:17.503650] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.725 [2024-12-06 11:07:17.637698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.725 [2024-12-06 11:07:17.670074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.725 [2024-12-06 11:07:17.670235] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.725 [2024-12-06 11:07:17.670249] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.725 [2024-12-06 11:07:17.670257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
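The nvmf_veth_init block above reads more easily as a short script: one veth pair stays in the root namespace for the initiator, two pairs are moved into the private nvmf_tgt_ns_spdk namespace for the target, everything is joined by the nvmf_br bridge, and reachability is ping-checked before nvmf_tgt starts. The sketch below condenses the commands from the trace (the individual 'ip link set ... up' calls are omitted); it restates the trace for readability and is not the common.sh source itself.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, listener port 4420
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator

The 'Cannot find device' and 'Cannot open network namespace' messages earlier in the trace appear to be the cleanup of a previous topology failing harmlessly (each is followed by 'true') before this one is created.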
00:11:06.725 [2024-12-06 11:07:17.670431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:06.725 [2024-12-06 11:07:17.670917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:06.725 [2024-12-06 11:07:17.671121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.725 [2024-12-06 11:07:17.671123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:06.725 11:07:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.725 11:07:17 -- common/autotest_common.sh@862 -- # return 0 00:11:06.725 11:07:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.725 11:07:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 11:07:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.725 11:07:17 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.725 11:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 [2024-12-06 11:07:17.803787] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.725 11:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.725 11:07:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.725 11:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 Malloc0 00:11:06.725 11:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.725 11:07:17 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.725 11:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 11:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.725 11:07:17 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.725 11:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 11:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.725 11:07:17 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.725 11:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.725 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:11:06.725 [2024-12-06 11:07:17.866329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.983 11:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.983 11:07:17 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:06.983 11:07:17 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:06.983 11:07:17 -- nvmf/common.sh@520 -- # config=() 00:11:06.983 11:07:17 -- nvmf/common.sh@520 -- # local subsystem config 00:11:06.983 11:07:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:06.983 11:07:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:06.983 { 00:11:06.984 "params": { 00:11:06.984 "name": "Nvme$subsystem", 00:11:06.984 "trtype": "$TEST_TRANSPORT", 00:11:06.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.984 "adrfam": "ipv4", 00:11:06.984 "trsvcid": "$NVMF_PORT", 00:11:06.984 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.984 "hdgst": ${hdgst:-false}, 00:11:06.984 "ddgst": ${ddgst:-false} 00:11:06.984 }, 00:11:06.984 "method": "bdev_nvme_attach_controller" 00:11:06.984 } 00:11:06.984 EOF 00:11:06.984 )") 00:11:06.984 11:07:17 -- nvmf/common.sh@542 -- # cat 00:11:06.984 11:07:17 -- nvmf/common.sh@544 -- # jq . 00:11:06.984 11:07:17 -- nvmf/common.sh@545 -- # IFS=, 00:11:06.984 11:07:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:06.984 "params": { 00:11:06.984 "name": "Nvme1", 00:11:06.984 "trtype": "tcp", 00:11:06.984 "traddr": "10.0.0.2", 00:11:06.984 "adrfam": "ipv4", 00:11:06.984 "trsvcid": "4420", 00:11:06.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.984 "hdgst": false, 00:11:06.984 "ddgst": false 00:11:06.984 }, 00:11:06.984 "method": "bdev_nvme_attach_controller" 00:11:06.984 }' 00:11:06.984 [2024-12-06 11:07:17.916245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:06.984 [2024-12-06 11:07:17.916320] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76146 ] 00:11:06.984 [2024-12-06 11:07:18.056338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.984 [2024-12-06 11:07:18.091733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.984 [2024-12-06 11:07:18.091875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.984 [2024-12-06 11:07:18.091880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.243 [2024-12-06 11:07:18.216975] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:07.243 [2024-12-06 11:07:18.217033] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:07.243 I/O targets: 00:11:07.243 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:07.243 00:11:07.243 00:11:07.243 CUnit - A unit testing framework for C - Version 2.1-3 00:11:07.243 http://cunit.sourceforge.net/ 00:11:07.243 00:11:07.243 00:11:07.243 Suite: bdevio tests on: Nvme1n1 00:11:07.243 Test: blockdev write read block ...passed 00:11:07.243 Test: blockdev write zeroes read block ...passed 00:11:07.243 Test: blockdev write zeroes read no split ...passed 00:11:07.243 Test: blockdev write zeroes read split ...passed 00:11:07.243 Test: blockdev write zeroes read split partial ...passed 00:11:07.243 Test: blockdev reset ...[2024-12-06 11:07:18.249278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:07.243 [2024-12-06 11:07:18.249367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1cea0 (9): Bad file descriptor 00:11:07.243 [2024-12-06 11:07:18.266588] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:07.243 passed 00:11:07.243 Test: blockdev write read 8 blocks ...passed 00:11:07.243 Test: blockdev write read size > 128k ...passed 00:11:07.243 Test: blockdev write read invalid size ...passed 00:11:07.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:07.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:07.243 Test: blockdev write read max offset ...passed 00:11:07.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:07.243 Test: blockdev writev readv 8 blocks ...passed 00:11:07.243 Test: blockdev writev readv 30 x 1block ...passed 00:11:07.243 Test: blockdev writev readv block ...passed 00:11:07.243 Test: blockdev writev readv size > 128k ...passed 00:11:07.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:07.243 Test: blockdev comparev and writev ...[2024-12-06 11:07:18.274431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.274501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.274527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.274561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.274885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.274912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.274934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.274946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.275290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.275317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.275337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.275350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:07.243 passed 00:11:07.243 Test: blockdev nvme passthru rw ...[2024-12-06 11:07:18.275688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.275726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.275747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.243 [2024-12-06 11:07:18.275759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:07.243 passed 00:11:07.243 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:07:18.276596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.243 [2024-12-06 11:07:18.276630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:07.243 passed 00:11:07.243 Test: blockdev nvme admin passthru ...[2024-12-06 11:07:18.276754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.243 [2024-12-06 11:07:18.276780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.276895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.243 [2024-12-06 11:07:18.276914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:07.243 [2024-12-06 11:07:18.277038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.243 [2024-12-06 11:07:18.277062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:07.243 passed 00:11:07.243 Test: blockdev copy ...passed 00:11:07.243 00:11:07.243 Run Summary: Type Total Ran Passed Failed Inactive 00:11:07.243 suites 1 1 n/a 0 0 00:11:07.243 tests 23 23 23 0 0 00:11:07.243 asserts 152 152 152 0 n/a 00:11:07.243 00:11:07.243 Elapsed time = 0.147 seconds 00:11:07.502 11:07:18 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.502 11:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.502 11:07:18 -- common/autotest_common.sh@10 -- # set +x 00:11:07.502 11:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.502 11:07:18 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:07.502 11:07:18 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:07.502 11:07:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:07.502 11:07:18 -- nvmf/common.sh@116 -- # sync 00:11:07.502 11:07:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:07.502 11:07:18 -- nvmf/common.sh@119 -- # set +e 00:11:07.502 11:07:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:07.502 11:07:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:07.502 rmmod nvme_tcp 00:11:07.502 rmmod nvme_fabrics 00:11:07.502 rmmod nvme_keyring 00:11:07.502 11:07:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:07.502 11:07:18 -- nvmf/common.sh@123 -- # set -e 00:11:07.502 11:07:18 -- nvmf/common.sh@124 -- # return 0 00:11:07.502 11:07:18 -- nvmf/common.sh@477 -- # '[' -n 76117 ']' 00:11:07.502 11:07:18 -- nvmf/common.sh@478 -- # killprocess 76117 00:11:07.502 11:07:18 -- common/autotest_common.sh@936 -- # '[' -z 76117 ']' 00:11:07.502 11:07:18 -- common/autotest_common.sh@940 -- # kill -0 76117 00:11:07.502 11:07:18 -- common/autotest_common.sh@941 -- # uname 00:11:07.502 11:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.502 11:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76117 00:11:07.502 11:07:18 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:11:07.502 11:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:07.502 11:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76117' 00:11:07.502 killing process with pid 76117 00:11:07.502 11:07:18 -- common/autotest_common.sh@955 -- # kill 76117 00:11:07.502 11:07:18 -- common/autotest_common.sh@960 -- # wait 76117 00:11:07.761 11:07:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:07.761 11:07:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:07.761 11:07:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:07.761 11:07:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.761 11:07:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:07.761 11:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.761 11:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.761 11:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.761 11:07:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:07.761 00:11:07.761 real 0m1.880s 00:11:07.761 user 0m5.135s 00:11:07.761 sys 0m0.621s 00:11:07.761 11:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.761 11:07:18 -- common/autotest_common.sh@10 -- # set +x 00:11:07.761 ************************************ 00:11:07.761 END TEST nvmf_bdevio 00:11:07.761 ************************************ 00:11:07.761 11:07:18 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:07.761 11:07:18 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:07.761 11:07:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:07.761 11:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.761 11:07:18 -- common/autotest_common.sh@10 -- # set +x 00:11:07.761 ************************************ 00:11:07.761 START TEST nvmf_bdevio_no_huge 00:11:07.761 ************************************ 00:11:07.761 11:07:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:07.761 * Looking for test storage... 
00:11:07.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.761 11:07:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:07.761 11:07:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:07.761 11:07:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.021 11:07:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.021 11:07:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.021 11:07:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.021 11:07:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.021 11:07:18 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.021 11:07:18 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.021 11:07:18 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.021 11:07:18 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.021 11:07:18 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.021 11:07:18 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.021 11:07:18 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.021 11:07:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.021 11:07:18 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.021 11:07:18 -- scripts/common.sh@344 -- # : 1 00:11:08.021 11:07:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.021 11:07:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.021 11:07:18 -- scripts/common.sh@364 -- # decimal 1 00:11:08.021 11:07:18 -- scripts/common.sh@352 -- # local d=1 00:11:08.021 11:07:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.021 11:07:18 -- scripts/common.sh@354 -- # echo 1 00:11:08.021 11:07:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:08.021 11:07:18 -- scripts/common.sh@365 -- # decimal 2 00:11:08.021 11:07:18 -- scripts/common.sh@352 -- # local d=2 00:11:08.021 11:07:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.021 11:07:18 -- scripts/common.sh@354 -- # echo 2 00:11:08.021 11:07:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:08.021 11:07:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:08.021 11:07:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:08.021 11:07:18 -- scripts/common.sh@367 -- # return 0 00:11:08.021 11:07:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.021 11:07:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:08.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.021 --rc genhtml_branch_coverage=1 00:11:08.021 --rc genhtml_function_coverage=1 00:11:08.021 --rc genhtml_legend=1 00:11:08.021 --rc geninfo_all_blocks=1 00:11:08.021 --rc geninfo_unexecuted_blocks=1 00:11:08.021 00:11:08.021 ' 00:11:08.021 11:07:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:08.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.021 --rc genhtml_branch_coverage=1 00:11:08.021 --rc genhtml_function_coverage=1 00:11:08.021 --rc genhtml_legend=1 00:11:08.021 --rc geninfo_all_blocks=1 00:11:08.021 --rc geninfo_unexecuted_blocks=1 00:11:08.021 00:11:08.021 ' 00:11:08.021 11:07:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:08.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.021 --rc genhtml_branch_coverage=1 00:11:08.021 --rc genhtml_function_coverage=1 00:11:08.021 --rc genhtml_legend=1 00:11:08.021 --rc geninfo_all_blocks=1 00:11:08.021 --rc geninfo_unexecuted_blocks=1 00:11:08.021 00:11:08.021 ' 00:11:08.021 
11:07:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:08.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.021 --rc genhtml_branch_coverage=1 00:11:08.021 --rc genhtml_function_coverage=1 00:11:08.021 --rc genhtml_legend=1 00:11:08.021 --rc geninfo_all_blocks=1 00:11:08.021 --rc geninfo_unexecuted_blocks=1 00:11:08.021 00:11:08.021 ' 00:11:08.021 11:07:18 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.021 11:07:18 -- nvmf/common.sh@7 -- # uname -s 00:11:08.021 11:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.021 11:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.021 11:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.021 11:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.021 11:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.021 11:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.021 11:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.021 11:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.021 11:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.021 11:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.021 11:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:08.021 11:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:08.021 11:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.021 11:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.021 11:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.021 11:07:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.021 11:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.021 11:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.021 11:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.021 11:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.021 11:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.022 11:07:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.022 11:07:19 -- paths/export.sh@5 -- # export PATH 00:11:08.022 11:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.022 11:07:19 -- nvmf/common.sh@46 -- # : 0 00:11:08.022 11:07:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:08.022 11:07:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:08.022 11:07:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:08.022 11:07:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.022 11:07:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.022 11:07:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:08.022 11:07:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:08.022 11:07:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:08.022 11:07:19 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.022 11:07:19 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.022 11:07:19 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:08.022 11:07:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:08.022 11:07:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.022 11:07:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:08.022 11:07:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:08.022 11:07:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:08.022 11:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.022 11:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.022 11:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.022 11:07:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:08.022 11:07:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:08.022 11:07:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:08.022 11:07:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:08.022 11:07:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:08.022 11:07:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:08.022 11:07:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.022 11:07:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.022 11:07:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:08.022 11:07:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:08.022 11:07:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.022 11:07:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.022 11:07:19 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.022 11:07:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.022 11:07:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.022 11:07:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.022 11:07:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.022 11:07:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.022 11:07:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:08.022 11:07:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:08.022 Cannot find device "nvmf_tgt_br" 00:11:08.022 11:07:19 -- nvmf/common.sh@154 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.022 Cannot find device "nvmf_tgt_br2" 00:11:08.022 11:07:19 -- nvmf/common.sh@155 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:08.022 11:07:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:08.022 Cannot find device "nvmf_tgt_br" 00:11:08.022 11:07:19 -- nvmf/common.sh@157 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:08.022 Cannot find device "nvmf_tgt_br2" 00:11:08.022 11:07:19 -- nvmf/common.sh@158 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:08.022 11:07:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:08.022 11:07:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.022 11:07:19 -- nvmf/common.sh@161 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.022 11:07:19 -- nvmf/common.sh@162 -- # true 00:11:08.022 11:07:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.022 11:07:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.022 11:07:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.022 11:07:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.281 11:07:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.281 11:07:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.281 11:07:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.281 11:07:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.281 11:07:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.281 11:07:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:08.281 11:07:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:08.281 11:07:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:08.281 11:07:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:08.281 11:07:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.281 11:07:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.281 11:07:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:08.281 11:07:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:08.281 11:07:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:08.281 11:07:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.281 11:07:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.281 11:07:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.281 11:07:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.281 11:07:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.281 11:07:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:08.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:08.281 00:11:08.281 --- 10.0.0.2 ping statistics --- 00:11:08.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.282 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:08.282 11:07:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:08.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:08.282 00:11:08.282 --- 10.0.0.3 ping statistics --- 00:11:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.282 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:08.282 11:07:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:08.282 00:11:08.282 --- 10.0.0.1 ping statistics --- 00:11:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.282 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:08.282 11:07:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.282 11:07:19 -- nvmf/common.sh@421 -- # return 0 00:11:08.282 11:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:08.282 11:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.282 11:07:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:08.282 11:07:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:08.282 11:07:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.282 11:07:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:08.282 11:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:08.282 11:07:19 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:08.282 11:07:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:08.282 11:07:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.282 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:08.282 11:07:19 -- nvmf/common.sh@469 -- # nvmfpid=76326 00:11:08.282 11:07:19 -- nvmf/common.sh@470 -- # waitforlisten 76326 00:11:08.282 11:07:19 -- common/autotest_common.sh@829 -- # '[' -z 76326 ']' 00:11:08.282 11:07:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:08.282 11:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.282 11:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.282 11:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:08.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.282 11:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.282 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:08.282 [2024-12-06 11:07:19.392623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:08.282 [2024-12-06 11:07:19.392724] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:08.540 [2024-12-06 11:07:19.543039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.541 [2024-12-06 11:07:19.625856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.541 [2024-12-06 11:07:19.626022] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.541 [2024-12-06 11:07:19.626034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.541 [2024-12-06 11:07:19.626042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.541 [2024-12-06 11:07:19.626207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.541 [2024-12-06 11:07:19.626882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:08.541 [2024-12-06 11:07:19.627139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:08.541 [2024-12-06 11:07:19.627143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.476 11:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.476 11:07:20 -- common/autotest_common.sh@862 -- # return 0 00:11:09.476 11:07:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:09.476 11:07:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 11:07:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.476 11:07:20 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.476 11:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 [2024-12-06 11:07:20.392929] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.476 11:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 11:07:20 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.476 11:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 Malloc0 00:11:09.476 11:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 11:07:20 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.476 11:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 11:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 11:07:20 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.476 11:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 
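The rpc_cmd calls traced above, together with the add_listener call immediately below, build the whole bdevio target in five steps. Restated as plain invocations for readability -- a condensed sketch of the trace, assuming rpc_cmd is the test framework's wrapper around scripts/rpc.py:

    # Flags copied verbatim from the trace above and the listener line that follows.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then launched against that target with --no-huge -s 1024 and a config on /dev/fd/62; the JSON that gen_nvmf_target_json prints for it (visible verbatim below) amounts to a single bdev_nvme_attach_controller call for Nvme1 over TCP to 10.0.0.2:4420.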
00:11:09.476 11:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 11:07:20 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.476 11:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.476 11:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:09.476 [2024-12-06 11:07:20.433107] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.476 11:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.476 11:07:20 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:09.476 11:07:20 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.476 11:07:20 -- nvmf/common.sh@520 -- # config=() 00:11:09.476 11:07:20 -- nvmf/common.sh@520 -- # local subsystem config 00:11:09.476 11:07:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:09.476 11:07:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:09.476 { 00:11:09.476 "params": { 00:11:09.476 "name": "Nvme$subsystem", 00:11:09.476 "trtype": "$TEST_TRANSPORT", 00:11:09.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.476 "adrfam": "ipv4", 00:11:09.476 "trsvcid": "$NVMF_PORT", 00:11:09.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.476 "hdgst": ${hdgst:-false}, 00:11:09.476 "ddgst": ${ddgst:-false} 00:11:09.476 }, 00:11:09.476 "method": "bdev_nvme_attach_controller" 00:11:09.476 } 00:11:09.476 EOF 00:11:09.476 )") 00:11:09.476 11:07:20 -- nvmf/common.sh@542 -- # cat 00:11:09.476 11:07:20 -- nvmf/common.sh@544 -- # jq . 00:11:09.476 11:07:20 -- nvmf/common.sh@545 -- # IFS=, 00:11:09.476 11:07:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:09.476 "params": { 00:11:09.476 "name": "Nvme1", 00:11:09.476 "trtype": "tcp", 00:11:09.476 "traddr": "10.0.0.2", 00:11:09.476 "adrfam": "ipv4", 00:11:09.476 "trsvcid": "4420", 00:11:09.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.476 "hdgst": false, 00:11:09.476 "ddgst": false 00:11:09.476 }, 00:11:09.476 "method": "bdev_nvme_attach_controller" 00:11:09.476 }' 00:11:09.476 [2024-12-06 11:07:20.488913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:09.476 [2024-12-06 11:07:20.489005] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76362 ] 00:11:09.746 [2024-12-06 11:07:20.629647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.746 [2024-12-06 11:07:20.740912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.746 [2024-12-06 11:07:20.741021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.746 [2024-12-06 11:07:20.741028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.016 [2024-12-06 11:07:20.905437] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:10.016 [2024-12-06 11:07:20.905477] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:10.016 I/O targets: 00:11:10.016 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:10.016 00:11:10.016 00:11:10.016 CUnit - A unit testing framework for C - Version 2.1-3 00:11:10.016 http://cunit.sourceforge.net/ 00:11:10.016 00:11:10.016 00:11:10.016 Suite: bdevio tests on: Nvme1n1 00:11:10.016 Test: blockdev write read block ...passed 00:11:10.016 Test: blockdev write zeroes read block ...passed 00:11:10.016 Test: blockdev write zeroes read no split ...passed 00:11:10.016 Test: blockdev write zeroes read split ...passed 00:11:10.016 Test: blockdev write zeroes read split partial ...passed 00:11:10.016 Test: blockdev reset ...[2024-12-06 11:07:20.946495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:10.016 [2024-12-06 11:07:20.946596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90260 (9): Bad file descriptor 00:11:10.016 [2024-12-06 11:07:20.964784] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:10.016 passed 00:11:10.016 Test: blockdev write read 8 blocks ...passed 00:11:10.016 Test: blockdev write read size > 128k ...passed 00:11:10.016 Test: blockdev write read invalid size ...passed 00:11:10.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.016 Test: blockdev write read max offset ...passed 00:11:10.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.016 Test: blockdev writev readv 8 blocks ...passed 00:11:10.016 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.016 Test: blockdev writev readv block ...passed 00:11:10.016 Test: blockdev writev readv size > 128k ...passed 00:11:10.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.016 Test: blockdev comparev and writev ...[2024-12-06 11:07:20.973012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.016 [2024-12-06 11:07:20.973064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:10.016 [2024-12-06 11:07:20.973088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.016 [2024-12-06 11:07:20.973107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:10.016 [2024-12-06 11:07:20.973435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.016 [2024-12-06 11:07:20.973485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:10.016 [2024-12-06 11:07:20.973503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.016 [2024-12-06 11:07:20.973524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:10.016 [2024-12-06 11:07:20.973831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.016 [2024-12-06 11:07:20.973854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:10.017 [2024-12-06 11:07:20.973871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.017 [2024-12-06 11:07:20.973881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:10.017 [2024-12-06 11:07:20.974260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.017 [2024-12-06 11:07:20.974290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:10.017 [2024-12-06 11:07:20.974309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.017 [2024-12-06 11:07:20.974319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:10.017 passed 00:11:10.017 Test: blockdev nvme passthru rw ...passed 00:11:10.017 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:07:20.975122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.017 [2024-12-06 11:07:20.975150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:10.017 [2024-12-06 11:07:20.975278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.017 [2024-12-06 11:07:20.975294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:10.017 passed 00:11:10.017 Test: blockdev nvme admin passthru ...[2024-12-06 11:07:20.975404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.017 [2024-12-06 11:07:20.975425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:10.017 [2024-12-06 11:07:20.975531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.017 [2024-12-06 11:07:20.975561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:10.017 passed 00:11:10.017 Test: blockdev copy ...passed 00:11:10.017 00:11:10.017 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.017 suites 1 1 n/a 0 0 00:11:10.017 tests 23 23 23 0 0 00:11:10.017 asserts 152 152 152 0 n/a 00:11:10.017 00:11:10.017 Elapsed time = 0.163 seconds 00:11:10.275 11:07:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.275 11:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.275 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:11:10.275 11:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.275 11:07:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:10.275 11:07:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:10.275 11:07:21 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:10.275 11:07:21 -- nvmf/common.sh@116 -- # sync 00:11:10.275 11:07:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:10.275 11:07:21 -- nvmf/common.sh@119 -- # set +e 00:11:10.275 11:07:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:10.275 11:07:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:10.275 rmmod nvme_tcp 00:11:10.275 rmmod nvme_fabrics 00:11:10.275 rmmod nvme_keyring 00:11:10.275 11:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:10.275 11:07:21 -- nvmf/common.sh@123 -- # set -e 00:11:10.275 11:07:21 -- nvmf/common.sh@124 -- # return 0 00:11:10.275 11:07:21 -- nvmf/common.sh@477 -- # '[' -n 76326 ']' 00:11:10.275 11:07:21 -- nvmf/common.sh@478 -- # killprocess 76326 00:11:10.275 11:07:21 -- common/autotest_common.sh@936 -- # '[' -z 76326 ']' 00:11:10.275 11:07:21 -- common/autotest_common.sh@940 -- # kill -0 76326 00:11:10.275 11:07:21 -- common/autotest_common.sh@941 -- # uname 00:11:10.275 11:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.275 11:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76326 00:11:10.275 11:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:10.275 11:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:10.275 killing process with pid 76326 00:11:10.275 11:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76326' 00:11:10.275 11:07:21 -- common/autotest_common.sh@955 -- # kill 76326 00:11:10.275 11:07:21 -- common/autotest_common.sh@960 -- # wait 76326 00:11:10.842 11:07:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:10.842 11:07:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:10.842 11:07:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:10.842 11:07:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.842 11:07:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:10.842 11:07:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.842 11:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.842 11:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.842 11:07:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:10.842 00:11:10.842 real 0m2.945s 00:11:10.842 user 0m9.447s 00:11:10.842 sys 0m1.189s 00:11:10.842 11:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.842 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:11:10.842 ************************************ 00:11:10.842 END TEST nvmf_bdevio_no_huge 00:11:10.842 ************************************ 00:11:10.842 11:07:21 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:10.842 11:07:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.842 11:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.842 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:11:10.842 ************************************ 00:11:10.842 START TEST nvmf_tls 00:11:10.842 ************************************ 00:11:10.842 11:07:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:10.842 * Looking for test storage... 
00:11:10.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.842 11:07:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.842 11:07:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.842 11:07:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:11.100 11:07:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:11.100 11:07:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:11.100 11:07:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:11.100 11:07:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:11.100 11:07:21 -- scripts/common.sh@335 -- # IFS=.-: 00:11:11.100 11:07:21 -- scripts/common.sh@335 -- # read -ra ver1 00:11:11.100 11:07:21 -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.100 11:07:21 -- scripts/common.sh@336 -- # read -ra ver2 00:11:11.100 11:07:21 -- scripts/common.sh@337 -- # local 'op=<' 00:11:11.100 11:07:21 -- scripts/common.sh@339 -- # ver1_l=2 00:11:11.100 11:07:21 -- scripts/common.sh@340 -- # ver2_l=1 00:11:11.101 11:07:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:11.101 11:07:21 -- scripts/common.sh@343 -- # case "$op" in 00:11:11.101 11:07:21 -- scripts/common.sh@344 -- # : 1 00:11:11.101 11:07:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:11.101 11:07:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.101 11:07:21 -- scripts/common.sh@364 -- # decimal 1 00:11:11.101 11:07:21 -- scripts/common.sh@352 -- # local d=1 00:11:11.101 11:07:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.101 11:07:21 -- scripts/common.sh@354 -- # echo 1 00:11:11.101 11:07:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:11.101 11:07:21 -- scripts/common.sh@365 -- # decimal 2 00:11:11.101 11:07:22 -- scripts/common.sh@352 -- # local d=2 00:11:11.101 11:07:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.101 11:07:22 -- scripts/common.sh@354 -- # echo 2 00:11:11.101 11:07:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:11.101 11:07:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:11.101 11:07:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:11.101 11:07:22 -- scripts/common.sh@367 -- # return 0 00:11:11.101 11:07:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.101 11:07:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:11.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.101 --rc genhtml_branch_coverage=1 00:11:11.101 --rc genhtml_function_coverage=1 00:11:11.101 --rc genhtml_legend=1 00:11:11.101 --rc geninfo_all_blocks=1 00:11:11.101 --rc geninfo_unexecuted_blocks=1 00:11:11.101 00:11:11.101 ' 00:11:11.101 11:07:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:11.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.101 --rc genhtml_branch_coverage=1 00:11:11.101 --rc genhtml_function_coverage=1 00:11:11.101 --rc genhtml_legend=1 00:11:11.101 --rc geninfo_all_blocks=1 00:11:11.101 --rc geninfo_unexecuted_blocks=1 00:11:11.101 00:11:11.101 ' 00:11:11.101 11:07:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:11.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.101 --rc genhtml_branch_coverage=1 00:11:11.101 --rc genhtml_function_coverage=1 00:11:11.101 --rc genhtml_legend=1 00:11:11.101 --rc geninfo_all_blocks=1 00:11:11.101 --rc geninfo_unexecuted_blocks=1 00:11:11.101 00:11:11.101 ' 00:11:11.101 
11:07:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:11.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.101 --rc genhtml_branch_coverage=1 00:11:11.101 --rc genhtml_function_coverage=1 00:11:11.101 --rc genhtml_legend=1 00:11:11.101 --rc geninfo_all_blocks=1 00:11:11.101 --rc geninfo_unexecuted_blocks=1 00:11:11.101 00:11:11.101 ' 00:11:11.101 11:07:22 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.101 11:07:22 -- nvmf/common.sh@7 -- # uname -s 00:11:11.101 11:07:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.101 11:07:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.101 11:07:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.101 11:07:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.101 11:07:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.101 11:07:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.101 11:07:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.101 11:07:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.101 11:07:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.101 11:07:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:11.101 11:07:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:11:11.101 11:07:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.101 11:07:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.101 11:07:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.101 11:07:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.101 11:07:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.101 11:07:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.101 11:07:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.101 11:07:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.101 11:07:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.101 11:07:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.101 11:07:22 -- paths/export.sh@5 -- # export PATH 00:11:11.101 11:07:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.101 11:07:22 -- nvmf/common.sh@46 -- # : 0 00:11:11.101 11:07:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:11.101 11:07:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:11.101 11:07:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:11.101 11:07:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.101 11:07:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.101 11:07:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:11.101 11:07:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:11.101 11:07:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:11.101 11:07:22 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.101 11:07:22 -- target/tls.sh@71 -- # nvmftestinit 00:11:11.101 11:07:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:11.101 11:07:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.101 11:07:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:11.101 11:07:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:11.101 11:07:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:11.101 11:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.101 11:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.101 11:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.101 11:07:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:11.101 11:07:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:11.101 11:07:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.101 11:07:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.101 11:07:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.101 11:07:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:11.101 11:07:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.101 11:07:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.101 11:07:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.101 
11:07:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.101 11:07:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.101 11:07:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.101 11:07:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.101 11:07:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.101 11:07:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:11.101 11:07:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:11.101 Cannot find device "nvmf_tgt_br" 00:11:11.101 11:07:22 -- nvmf/common.sh@154 -- # true 00:11:11.101 11:07:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.101 Cannot find device "nvmf_tgt_br2" 00:11:11.101 11:07:22 -- nvmf/common.sh@155 -- # true 00:11:11.101 11:07:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:11.101 11:07:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:11.101 Cannot find device "nvmf_tgt_br" 00:11:11.101 11:07:22 -- nvmf/common.sh@157 -- # true 00:11:11.101 11:07:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:11.101 Cannot find device "nvmf_tgt_br2" 00:11:11.101 11:07:22 -- nvmf/common.sh@158 -- # true 00:11:11.102 11:07:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:11.102 11:07:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:11.102 11:07:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.102 11:07:22 -- nvmf/common.sh@161 -- # true 00:11:11.102 11:07:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.102 11:07:22 -- nvmf/common.sh@162 -- # true 00:11:11.102 11:07:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.102 11:07:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.102 11:07:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.102 11:07:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.102 11:07:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.102 11:07:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.102 11:07:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.102 11:07:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.102 11:07:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.102 11:07:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:11.102 11:07:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:11.359 11:07:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:11.359 11:07:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:11.359 11:07:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.359 11:07:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.359 11:07:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.359 11:07:22 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:11.359 11:07:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:11.359 11:07:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.359 11:07:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.359 11:07:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.359 11:07:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.359 11:07:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.359 11:07:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:11.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:11.359 00:11:11.359 --- 10.0.0.2 ping statistics --- 00:11:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.359 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:11.359 11:07:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:11.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:11.359 00:11:11.359 --- 10.0.0.3 ping statistics --- 00:11:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.359 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:11.359 11:07:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:11.359 00:11:11.359 --- 10.0.0.1 ping statistics --- 00:11:11.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.359 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:11.359 11:07:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.359 11:07:22 -- nvmf/common.sh@421 -- # return 0 00:11:11.359 11:07:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:11.359 11:07:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.359 11:07:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:11.359 11:07:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:11.359 11:07:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.359 11:07:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:11.359 11:07:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:11.359 11:07:22 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:11.359 11:07:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:11.359 11:07:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.359 11:07:22 -- common/autotest_common.sh@10 -- # set +x 00:11:11.359 11:07:22 -- nvmf/common.sh@469 -- # nvmfpid=76543 00:11:11.359 11:07:22 -- nvmf/common.sh@470 -- # waitforlisten 76543 00:11:11.359 11:07:22 -- common/autotest_common.sh@829 -- # '[' -z 76543 ']' 00:11:11.359 11:07:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:11.359 11:07:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.359 11:07:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.359 11:07:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:11.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.359 11:07:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.359 11:07:22 -- common/autotest_common.sh@10 -- # set +x 00:11:11.359 [2024-12-06 11:07:22.414025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:11.359 [2024-12-06 11:07:22.414107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.617 [2024-12-06 11:07:22.556230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.617 [2024-12-06 11:07:22.594753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:11.617 [2024-12-06 11:07:22.594921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.617 [2024-12-06 11:07:22.594937] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.617 [2024-12-06 11:07:22.594947] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.617 [2024-12-06 11:07:22.594983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.617 11:07:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.617 11:07:22 -- common/autotest_common.sh@862 -- # return 0 00:11:11.617 11:07:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.617 11:07:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.617 11:07:22 -- common/autotest_common.sh@10 -- # set +x 00:11:11.617 11:07:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.617 11:07:22 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:11.617 11:07:22 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:11.875 true 00:11:11.875 11:07:22 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:11.875 11:07:22 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:12.132 11:07:23 -- target/tls.sh@82 -- # version=0 00:11:12.132 11:07:23 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:12.132 11:07:23 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:12.699 11:07:23 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:12.699 11:07:23 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:12.958 11:07:23 -- target/tls.sh@90 -- # version=13 00:11:12.958 11:07:23 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:12.958 11:07:23 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:13.216 11:07:24 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:13.216 11:07:24 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.216 11:07:24 -- target/tls.sh@98 -- # version=7 00:11:13.216 11:07:24 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:13.216 11:07:24 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.216 11:07:24 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:13.783 11:07:24 -- target/tls.sh@105 -- # ktls=false 00:11:13.783 11:07:24 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:13.783 11:07:24 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:13.783 11:07:24 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.783 11:07:24 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:14.349 11:07:25 -- target/tls.sh@113 -- # ktls=true 00:11:14.349 11:07:25 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:14.349 11:07:25 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:14.349 11:07:25 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:14.349 11:07:25 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:14.606 11:07:25 -- target/tls.sh@121 -- # ktls=false 00:11:14.606 11:07:25 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:14.606 11:07:25 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:14.606 11:07:25 -- target/tls.sh@49 -- # local key hash crc 00:11:14.606 11:07:25 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:14.606 11:07:25 -- target/tls.sh@51 -- # hash=01 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # gzip -1 -c 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # tail -c8 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # head -c 4 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # crc='p$H�' 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.606 11:07:25 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.606 11:07:25 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:14.606 11:07:25 -- target/tls.sh@49 -- # local key hash crc 00:11:14.606 11:07:25 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:14.606 11:07:25 -- target/tls.sh@51 -- # hash=01 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # tail -c8 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # gzip -1 -c 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # head -c 4 00:11:14.606 11:07:25 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:14.606 11:07:25 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:14.606 11:07:25 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:14.606 11:07:25 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.606 11:07:25 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:14.606 11:07:25 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.606 11:07:25 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:14.606 11:07:25 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.606 11:07:25 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:14.606 11:07:25 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:14.864 11:07:25 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:15.122 11:07:26 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:15.122 11:07:26 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:15.122 11:07:26 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:15.379 [2024-12-06 11:07:26.501164] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.379 11:07:26 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:15.637 11:07:26 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:15.895 [2024-12-06 11:07:27.033314] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:15.895 [2024-12-06 11:07:27.033584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.154 11:07:27 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:16.155 malloc0 00:11:16.155 11:07:27 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:16.414 11:07:27 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:16.672 11:07:27 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:28.871 Initializing NVMe Controllers 00:11:28.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:28.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:28.871 Initialization complete. Launching workers. 
00:11:28.871 ======================================================== 00:11:28.871 Latency(us) 00:11:28.871 Device Information : IOPS MiB/s Average min max 00:11:28.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10037.50 39.21 6377.30 1339.86 7846.49 00:11:28.872 ======================================================== 00:11:28.872 Total : 10037.50 39.21 6377.30 1339.86 7846.49 00:11:28.872 00:11:28.872 11:07:37 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:28.872 11:07:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:28.872 11:07:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:28.872 11:07:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:28.872 11:07:37 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:28.872 11:07:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.872 11:07:37 -- target/tls.sh@28 -- # bdevperf_pid=76782 00:11:28.872 11:07:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:28.872 11:07:37 -- target/tls.sh@31 -- # waitforlisten 76782 /var/tmp/bdevperf.sock 00:11:28.872 11:07:37 -- common/autotest_common.sh@829 -- # '[' -z 76782 ']' 00:11:28.872 11:07:37 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:28.872 11:07:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.872 11:07:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.872 11:07:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:28.872 11:07:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.872 11:07:37 -- common/autotest_common.sh@10 -- # set +x 00:11:28.872 [2024-12-06 11:07:38.003600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:28.872 [2024-12-06 11:07:38.003736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76782 ] 00:11:28.872 [2024-12-06 11:07:38.146661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.872 [2024-12-06 11:07:38.186207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.872 11:07:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.872 11:07:38 -- common/autotest_common.sh@862 -- # return 0 00:11:28.872 11:07:38 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:28.872 [2024-12-06 11:07:39.219745] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:28.872 TLSTESTn1 00:11:28.872 11:07:39 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:28.872 Running I/O for 10 seconds... 
00:11:38.878 00:11:38.878 Latency(us) 00:11:38.878 [2024-12-06T11:07:50.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.878 [2024-12-06T11:07:50.025Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:38.878 Verification LBA range: start 0x0 length 0x2000 00:11:38.878 TLSTESTn1 : 10.01 5825.75 22.76 0.00 0.00 21940.08 3336.38 24307.90 00:11:38.878 [2024-12-06T11:07:50.025Z] =================================================================================================================== 00:11:38.878 [2024-12-06T11:07:50.025Z] Total : 5825.75 22.76 0.00 0.00 21940.08 3336.38 24307.90 00:11:38.878 0 00:11:38.878 11:07:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.878 11:07:49 -- target/tls.sh@45 -- # killprocess 76782 00:11:38.878 11:07:49 -- common/autotest_common.sh@936 -- # '[' -z 76782 ']' 00:11:38.878 11:07:49 -- common/autotest_common.sh@940 -- # kill -0 76782 00:11:38.878 11:07:49 -- common/autotest_common.sh@941 -- # uname 00:11:38.878 11:07:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:38.878 11:07:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76782 00:11:38.878 killing process with pid 76782 00:11:38.878 Received shutdown signal, test time was about 10.000000 seconds 00:11:38.878 00:11:38.878 Latency(us) 00:11:38.878 [2024-12-06T11:07:50.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.878 [2024-12-06T11:07:50.026Z] =================================================================================================================== 00:11:38.879 [2024-12-06T11:07:50.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:38.879 11:07:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:38.879 11:07:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:38.879 11:07:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76782' 00:11:38.879 11:07:49 -- common/autotest_common.sh@955 -- # kill 76782 00:11:38.879 11:07:49 -- common/autotest_common.sh@960 -- # wait 76782 00:11:38.879 11:07:49 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.879 11:07:49 -- common/autotest_common.sh@650 -- # local es=0 00:11:38.879 11:07:49 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.879 11:07:49 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:38.879 11:07:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.879 11:07:49 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:38.879 11:07:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.879 11:07:49 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.879 11:07:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:38.879 11:07:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:38.879 11:07:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:38.879 11:07:49 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:38.879 11:07:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:38.879 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:38.879 11:07:49 -- target/tls.sh@28 -- # bdevperf_pid=76920 00:11:38.879 11:07:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:38.879 11:07:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:38.879 11:07:49 -- target/tls.sh@31 -- # waitforlisten 76920 /var/tmp/bdevperf.sock 00:11:38.879 11:07:49 -- common/autotest_common.sh@829 -- # '[' -z 76920 ']' 00:11:38.879 11:07:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:38.879 11:07:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.879 11:07:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:38.879 11:07:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.879 11:07:49 -- common/autotest_common.sh@10 -- # set +x 00:11:38.879 [2024-12-06 11:07:49.672910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:38.879 [2024-12-06 11:07:49.673064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76920 ] 00:11:38.879 [2024-12-06 11:07:49.814203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.879 [2024-12-06 11:07:49.848414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.879 11:07:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.879 11:07:49 -- common/autotest_common.sh@862 -- # return 0 00:11:38.879 11:07:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:39.138 [2024-12-06 11:07:50.183033] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:39.138 [2024-12-06 11:07:50.194864] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:39.138 [2024-12-06 11:07:50.195493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d3f90 (107): Transport endpoint is not connected 00:11:39.138 [2024-12-06 11:07:50.196482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d3f90 (9): Bad file descriptor 00:11:39.138 [2024-12-06 11:07:50.197478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:39.138 [2024-12-06 11:07:50.197515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:39.138 [2024-12-06 11:07:50.197541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:39.138 request: 00:11:39.138 { 00:11:39.138 "name": "TLSTEST", 00:11:39.138 "trtype": "tcp", 00:11:39.138 "traddr": "10.0.0.2", 00:11:39.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.138 "adrfam": "ipv4", 00:11:39.138 "trsvcid": "4420", 00:11:39.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.138 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:39.138 "method": "bdev_nvme_attach_controller", 00:11:39.138 "req_id": 1 00:11:39.138 } 00:11:39.138 Got JSON-RPC error response 00:11:39.138 response: 00:11:39.138 { 00:11:39.138 "code": -32602, 00:11:39.138 "message": "Invalid parameters" 00:11:39.138 } 00:11:39.138 11:07:50 -- target/tls.sh@36 -- # killprocess 76920 00:11:39.138 11:07:50 -- common/autotest_common.sh@936 -- # '[' -z 76920 ']' 00:11:39.138 11:07:50 -- common/autotest_common.sh@940 -- # kill -0 76920 00:11:39.138 11:07:50 -- common/autotest_common.sh@941 -- # uname 00:11:39.138 11:07:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.138 11:07:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76920 00:11:39.138 killing process with pid 76920 00:11:39.138 Received shutdown signal, test time was about 10.000000 seconds 00:11:39.138 00:11:39.138 Latency(us) 00:11:39.138 [2024-12-06T11:07:50.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.138 [2024-12-06T11:07:50.285Z] =================================================================================================================== 00:11:39.138 [2024-12-06T11:07:50.285Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:39.138 11:07:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:39.138 11:07:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:39.138 11:07:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76920' 00:11:39.138 11:07:50 -- common/autotest_common.sh@955 -- # kill 76920 00:11:39.138 11:07:50 -- common/autotest_common.sh@960 -- # wait 76920 00:11:39.397 11:07:50 -- target/tls.sh@37 -- # return 1 00:11:39.397 11:07:50 -- common/autotest_common.sh@653 -- # es=1 00:11:39.397 11:07:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.397 11:07:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.397 11:07:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.397 11:07:50 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.397 11:07:50 -- common/autotest_common.sh@650 -- # local es=0 00:11:39.397 11:07:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.397 11:07:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:39.397 11:07:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.397 11:07:50 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:39.397 11:07:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.397 11:07:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.397 11:07:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:39.397 11:07:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:39.397 11:07:50 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:11:39.397 11:07:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:39.397 11:07:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.397 11:07:50 -- target/tls.sh@28 -- # bdevperf_pid=76940 00:11:39.397 11:07:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:39.397 11:07:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:39.397 11:07:50 -- target/tls.sh@31 -- # waitforlisten 76940 /var/tmp/bdevperf.sock 00:11:39.397 11:07:50 -- common/autotest_common.sh@829 -- # '[' -z 76940 ']' 00:11:39.397 11:07:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.397 11:07:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.397 11:07:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.397 11:07:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.397 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:11:39.397 [2024-12-06 11:07:50.421760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:39.397 [2024-12-06 11:07:50.421860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76940 ] 00:11:39.655 [2024-12-06 11:07:50.554512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.655 [2024-12-06 11:07:50.586430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.588 11:07:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.588 11:07:51 -- common/autotest_common.sh@862 -- # return 0 00:11:40.588 11:07:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.588 [2024-12-06 11:07:51.678161] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:40.588 [2024-12-06 11:07:51.685127] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:40.588 [2024-12-06 11:07:51.685181] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:40.588 [2024-12-06 11:07:51.685256] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:40.588 [2024-12-06 11:07:51.685789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399f90 (107): Transport endpoint is not connected 00:11:40.588 [2024-12-06 11:07:51.686764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399f90 (9): Bad file descriptor 00:11:40.588 [2024-12-06 11:07:51.687761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:40.588 [2024-12-06 11:07:51.687788] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:40.588 [2024-12-06 11:07:51.687798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:40.588 request: 00:11:40.588 { 00:11:40.588 "name": "TLSTEST", 00:11:40.588 "trtype": "tcp", 00:11:40.588 "traddr": "10.0.0.2", 00:11:40.588 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:40.588 "adrfam": "ipv4", 00:11:40.588 "trsvcid": "4420", 00:11:40.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.588 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:40.588 "method": "bdev_nvme_attach_controller", 00:11:40.588 "req_id": 1 00:11:40.588 } 00:11:40.588 Got JSON-RPC error response 00:11:40.589 response: 00:11:40.589 { 00:11:40.589 "code": -32602, 00:11:40.589 "message": "Invalid parameters" 00:11:40.589 } 00:11:40.589 11:07:51 -- target/tls.sh@36 -- # killprocess 76940 00:11:40.589 11:07:51 -- common/autotest_common.sh@936 -- # '[' -z 76940 ']' 00:11:40.589 11:07:51 -- common/autotest_common.sh@940 -- # kill -0 76940 00:11:40.589 11:07:51 -- common/autotest_common.sh@941 -- # uname 00:11:40.589 11:07:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.589 11:07:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76940 00:11:40.847 killing process with pid 76940 00:11:40.847 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.847 00:11:40.847 Latency(us) 00:11:40.847 [2024-12-06T11:07:51.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.847 [2024-12-06T11:07:51.994Z] =================================================================================================================== 00:11:40.847 [2024-12-06T11:07:51.994Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.847 11:07:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:40.847 11:07:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:40.847 11:07:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76940' 00:11:40.847 11:07:51 -- common/autotest_common.sh@955 -- # kill 76940 00:11:40.847 11:07:51 -- common/autotest_common.sh@960 -- # wait 76940 00:11:40.847 11:07:51 -- target/tls.sh@37 -- # return 1 00:11:40.847 11:07:51 -- common/autotest_common.sh@653 -- # es=1 00:11:40.847 11:07:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.847 11:07:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.847 11:07:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.847 11:07:51 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.847 11:07:51 -- common/autotest_common.sh@650 -- # local es=0 00:11:40.847 11:07:51 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.847 11:07:51 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:40.847 11:07:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.847 11:07:51 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:40.847 11:07:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.847 11:07:51 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.847 11:07:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:40.847 11:07:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:40.847 11:07:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:40.847 11:07:51 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:40.847 11:07:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:40.848 11:07:51 -- target/tls.sh@28 -- # bdevperf_pid=76962 00:11:40.848 11:07:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:40.848 11:07:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:40.848 11:07:51 -- target/tls.sh@31 -- # waitforlisten 76962 /var/tmp/bdevperf.sock 00:11:40.848 11:07:51 -- common/autotest_common.sh@829 -- # '[' -z 76962 ']' 00:11:40.848 11:07:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:40.848 11:07:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.848 11:07:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:40.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:40.848 11:07:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.848 11:07:51 -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 [2024-12-06 11:07:51.931906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:40.848 [2024-12-06 11:07:51.932563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76962 ] 00:11:41.106 [2024-12-06 11:07:52.072626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.106 [2024-12-06 11:07:52.105939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.106 11:07:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.106 11:07:52 -- common/autotest_common.sh@862 -- # return 0 00:11:41.106 11:07:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:41.365 [2024-12-06 11:07:52.424704] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:41.365 [2024-12-06 11:07:52.435265] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:41.365 [2024-12-06 11:07:52.435307] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:41.365 [2024-12-06 11:07:52.435361] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:41.365 [2024-12-06 11:07:52.436101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1986f90 
(107): Transport endpoint is not connected 00:11:41.365 [2024-12-06 11:07:52.437089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1986f90 (9): Bad file descriptor 00:11:41.365 [2024-12-06 11:07:52.438085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:41.365 [2024-12-06 11:07:52.438121] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:41.365 [2024-12-06 11:07:52.438147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:41.365 request: 00:11:41.365 { 00:11:41.365 "name": "TLSTEST", 00:11:41.365 "trtype": "tcp", 00:11:41.365 "traddr": "10.0.0.2", 00:11:41.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:41.365 "adrfam": "ipv4", 00:11:41.365 "trsvcid": "4420", 00:11:41.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.365 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:41.365 "method": "bdev_nvme_attach_controller", 00:11:41.365 "req_id": 1 00:11:41.365 } 00:11:41.365 Got JSON-RPC error response 00:11:41.365 response: 00:11:41.365 { 00:11:41.365 "code": -32602, 00:11:41.365 "message": "Invalid parameters" 00:11:41.365 } 00:11:41.365 11:07:52 -- target/tls.sh@36 -- # killprocess 76962 00:11:41.365 11:07:52 -- common/autotest_common.sh@936 -- # '[' -z 76962 ']' 00:11:41.365 11:07:52 -- common/autotest_common.sh@940 -- # kill -0 76962 00:11:41.365 11:07:52 -- common/autotest_common.sh@941 -- # uname 00:11:41.365 11:07:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.365 11:07:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76962 00:11:41.365 11:07:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:41.365 11:07:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:41.365 killing process with pid 76962 00:11:41.365 11:07:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76962' 00:11:41.365 11:07:52 -- common/autotest_common.sh@955 -- # kill 76962 00:11:41.365 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.365 00:11:41.365 Latency(us) 00:11:41.365 [2024-12-06T11:07:52.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.365 [2024-12-06T11:07:52.512Z] =================================================================================================================== 00:11:41.365 [2024-12-06T11:07:52.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:41.365 11:07:52 -- common/autotest_common.sh@960 -- # wait 76962 00:11:41.623 11:07:52 -- target/tls.sh@37 -- # return 1 00:11:41.623 11:07:52 -- common/autotest_common.sh@653 -- # es=1 00:11:41.623 11:07:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.623 11:07:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.623 11:07:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.623 11:07:52 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.623 11:07:52 -- common/autotest_common.sh@650 -- # local es=0 00:11:41.623 11:07:52 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.623 11:07:52 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:41.623 11:07:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.623 11:07:52 -- common/autotest_common.sh@642 -- # 
type -t run_bdevperf 00:11:41.623 11:07:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.623 11:07:52 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.623 11:07:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:41.623 11:07:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:41.623 11:07:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:41.623 11:07:52 -- target/tls.sh@23 -- # psk= 00:11:41.623 11:07:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:41.624 11:07:52 -- target/tls.sh@28 -- # bdevperf_pid=76982 00:11:41.624 11:07:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:41.624 11:07:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:41.624 11:07:52 -- target/tls.sh@31 -- # waitforlisten 76982 /var/tmp/bdevperf.sock 00:11:41.624 11:07:52 -- common/autotest_common.sh@829 -- # '[' -z 76982 ']' 00:11:41.624 11:07:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.624 11:07:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.624 11:07:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.624 11:07:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.624 11:07:52 -- common/autotest_common.sh@10 -- # set +x 00:11:41.624 [2024-12-06 11:07:52.679831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:41.624 [2024-12-06 11:07:52.679923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76982 ] 00:11:41.882 [2024-12-06 11:07:52.815984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.882 [2024-12-06 11:07:52.847978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.817 11:07:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.817 11:07:53 -- common/autotest_common.sh@862 -- # return 0 00:11:42.817 11:07:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:42.817 [2024-12-06 11:07:53.905376] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:42.817 [2024-12-06 11:07:53.907130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5cc20 (9): Bad file descriptor 00:11:42.817 [2024-12-06 11:07:53.908125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:42.817 [2024-12-06 11:07:53.908162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:42.817 [2024-12-06 11:07:53.908188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:42.817 request: 00:11:42.817 { 00:11:42.817 "name": "TLSTEST", 00:11:42.817 "trtype": "tcp", 00:11:42.817 "traddr": "10.0.0.2", 00:11:42.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.817 "adrfam": "ipv4", 00:11:42.817 "trsvcid": "4420", 00:11:42.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.817 "method": "bdev_nvme_attach_controller", 00:11:42.817 "req_id": 1 00:11:42.817 } 00:11:42.817 Got JSON-RPC error response 00:11:42.817 response: 00:11:42.817 { 00:11:42.817 "code": -32602, 00:11:42.817 "message": "Invalid parameters" 00:11:42.817 } 00:11:42.817 11:07:53 -- target/tls.sh@36 -- # killprocess 76982 00:11:42.817 11:07:53 -- common/autotest_common.sh@936 -- # '[' -z 76982 ']' 00:11:42.817 11:07:53 -- common/autotest_common.sh@940 -- # kill -0 76982 00:11:42.817 11:07:53 -- common/autotest_common.sh@941 -- # uname 00:11:42.817 11:07:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.817 11:07:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76982 00:11:43.075 11:07:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:43.075 11:07:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:43.075 killing process with pid 76982 00:11:43.075 11:07:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76982' 00:11:43.075 11:07:53 -- common/autotest_common.sh@955 -- # kill 76982 00:11:43.075 Received shutdown signal, test time was about 10.000000 seconds 00:11:43.075 00:11:43.075 Latency(us) 00:11:43.075 [2024-12-06T11:07:54.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.075 [2024-12-06T11:07:54.222Z] =================================================================================================================== 00:11:43.075 [2024-12-06T11:07:54.222Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:43.075 11:07:53 -- common/autotest_common.sh@960 -- # wait 76982 00:11:43.075 11:07:54 -- target/tls.sh@37 -- # return 1 00:11:43.075 11:07:54 -- common/autotest_common.sh@653 -- # es=1 00:11:43.075 11:07:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.075 11:07:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:43.075 11:07:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.075 11:07:54 -- target/tls.sh@167 -- # killprocess 76543 00:11:43.075 11:07:54 -- common/autotest_common.sh@936 -- # '[' -z 76543 ']' 00:11:43.075 11:07:54 -- common/autotest_common.sh@940 -- # kill -0 76543 00:11:43.075 11:07:54 -- common/autotest_common.sh@941 -- # uname 00:11:43.075 11:07:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:43.076 11:07:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76543 00:11:43.076 11:07:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:43.076 11:07:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:43.076 killing process with pid 76543 00:11:43.076 11:07:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76543' 00:11:43.076 11:07:54 -- common/autotest_common.sh@955 -- # kill 76543 00:11:43.076 11:07:54 -- common/autotest_common.sh@960 -- # wait 76543 00:11:43.334 11:07:54 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:43.334 11:07:54 -- target/tls.sh@49 -- # local key hash crc 00:11:43.334 11:07:54 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:43.334 11:07:54 -- target/tls.sh@51 -- # hash=02 
00:11:43.334 11:07:54 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:43.334 11:07:54 -- target/tls.sh@52 -- # gzip -1 -c 00:11:43.334 11:07:54 -- target/tls.sh@52 -- # head -c 4 00:11:43.334 11:07:54 -- target/tls.sh@52 -- # tail -c8 00:11:43.334 11:07:54 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:43.334 11:07:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:43.334 11:07:54 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:43.334 11:07:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.334 11:07:54 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.334 11:07:54 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.334 11:07:54 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.334 11:07:54 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.334 11:07:54 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:43.334 11:07:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:43.334 11:07:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.334 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:11:43.334 11:07:54 -- nvmf/common.sh@469 -- # nvmfpid=77028 00:11:43.334 11:07:54 -- nvmf/common.sh@470 -- # waitforlisten 77028 00:11:43.334 11:07:54 -- common/autotest_common.sh@829 -- # '[' -z 77028 ']' 00:11:43.334 11:07:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:43.334 11:07:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.334 11:07:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.334 11:07:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.334 11:07:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.334 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:11:43.334 [2024-12-06 11:07:54.360515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.334 [2024-12-06 11:07:54.360635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.594 [2024-12-06 11:07:54.500782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.594 [2024-12-06 11:07:54.532369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.594 [2024-12-06 11:07:54.532534] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.594 [2024-12-06 11:07:54.532546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.594 [2024-12-06 11:07:54.532571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
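For reference, the target/tls.sh@52-54 pipeline traced above is just building the NVMe TLS PSK "interchange" string: gzip is used as a convenient CRC32 generator (the last eight bytes of a gzip stream are the CRC32, little-endian, followed by the input length), the four CRC bytes are appended to the key text, and the result is base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<base64>:. A minimal standalone sketch of the same derivation, assuming GNU gzip and coreutils; the helper name is illustrative:

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # Emit the key text followed by its CRC32. gzip's trailer ends with the
    # CRC32 (little-endian) and then the input size, so `tail -c8 | head -c4`
    # extracts exactly the four CRC bytes.
    key_with_crc() { printf '%s' "$key"; printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4; }
    psk="NVMeTLSkey-1:${hash}:$(key_with_crc | base64 -w0):"
    printf '%s\n' "$psk"
    # Expected to print the key_long value logged above:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: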
00:11:43.594 [2024-12-06 11:07:54.532601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.594 11:07:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.594 11:07:54 -- common/autotest_common.sh@862 -- # return 0 00:11:43.594 11:07:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.594 11:07:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.594 11:07:54 -- common/autotest_common.sh@10 -- # set +x 00:11:43.594 11:07:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.594 11:07:54 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.594 11:07:54 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.594 11:07:54 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:43.854 [2024-12-06 11:07:54.902987] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.854 11:07:54 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:44.111 11:07:55 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:44.369 [2024-12-06 11:07:55.419088] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:44.369 [2024-12-06 11:07:55.419506] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.369 11:07:55 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:44.627 malloc0 00:11:44.627 11:07:55 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:44.884 11:07:55 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.142 11:07:56 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.142 11:07:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:45.142 11:07:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:45.142 11:07:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:45.142 11:07:56 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:45.142 11:07:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.142 11:07:56 -- target/tls.sh@28 -- # bdevperf_pid=77066 00:11:45.142 11:07:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:45.142 11:07:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:45.142 11:07:56 -- target/tls.sh@31 -- # waitforlisten 77066 /var/tmp/bdevperf.sock 00:11:45.142 11:07:56 -- common/autotest_common.sh@829 -- # '[' -z 77066 ']' 00:11:45.142 11:07:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:45.142 11:07:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.142 11:07:56 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:45.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:45.142 11:07:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.142 11:07:56 -- common/autotest_common.sh@10 -- # set +x 00:11:45.142 [2024-12-06 11:07:56.211853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:45.142 [2024-12-06 11:07:56.212232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77066 ] 00:11:45.401 [2024-12-06 11:07:56.345342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.401 [2024-12-06 11:07:56.383149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.401 11:07:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.401 11:07:56 -- common/autotest_common.sh@862 -- # return 0 00:11:45.401 11:07:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.659 [2024-12-06 11:07:56.750926] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:45.918 TLSTESTn1 00:11:45.918 11:07:56 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:45.918 Running I/O for 10 seconds... 00:11:55.935 00:11:55.935 Latency(us) 00:11:55.935 [2024-12-06T11:08:07.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.935 [2024-12-06T11:08:07.082Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:55.935 Verification LBA range: start 0x0 length 0x2000 00:11:55.935 TLSTESTn1 : 10.01 6193.77 24.19 0.00 0.00 20633.47 4259.84 20494.89 00:11:55.935 [2024-12-06T11:08:07.082Z] =================================================================================================================== 00:11:55.935 [2024-12-06T11:08:07.082Z] Total : 6193.77 24.19 0.00 0.00 20633.47 4259.84 20494.89 00:11:55.935 0 00:11:55.935 11:08:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:55.935 11:08:06 -- target/tls.sh@45 -- # killprocess 77066 00:11:55.935 11:08:06 -- common/autotest_common.sh@936 -- # '[' -z 77066 ']' 00:11:55.935 11:08:06 -- common/autotest_common.sh@940 -- # kill -0 77066 00:11:55.935 11:08:06 -- common/autotest_common.sh@941 -- # uname 00:11:55.935 11:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.935 11:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77066 00:11:55.935 11:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:55.935 11:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:55.935 11:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77066' 00:11:55.935 killing process with pid 77066 00:11:55.935 11:08:07 -- common/autotest_common.sh@955 -- # kill 77066 00:11:55.935 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.935 00:11:55.935 Latency(us) 00:11:55.935 [2024-12-06T11:08:07.082Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.935 [2024-12-06T11:08:07.082Z] =================================================================================================================== 00:11:55.935 [2024-12-06T11:08:07.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:55.935 11:08:07 -- common/autotest_common.sh@960 -- # wait 77066 00:11:56.194 11:08:07 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.194 11:08:07 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.194 11:08:07 -- common/autotest_common.sh@650 -- # local es=0 00:11:56.194 11:08:07 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.194 11:08:07 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:56.194 11:08:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.194 11:08:07 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:56.194 11:08:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.194 11:08:07 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.194 11:08:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:56.194 11:08:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:56.194 11:08:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:56.194 11:08:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:56.194 11:08:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:56.194 11:08:07 -- target/tls.sh@28 -- # bdevperf_pid=77195 00:11:56.194 11:08:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:56.194 11:08:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:56.194 11:08:07 -- target/tls.sh@31 -- # waitforlisten 77195 /var/tmp/bdevperf.sock 00:11:56.194 11:08:07 -- common/autotest_common.sh@829 -- # '[' -z 77195 ']' 00:11:56.194 11:08:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.194 11:08:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.194 11:08:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.194 11:08:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.194 11:08:07 -- common/autotest_common.sh@10 -- # set +x 00:11:56.194 [2024-12-06 11:08:07.184534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:56.195 [2024-12-06 11:08:07.184793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77195 ] 00:11:56.195 [2024-12-06 11:08:07.318638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.454 [2024-12-06 11:08:07.353244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.454 11:08:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.454 11:08:07 -- common/autotest_common.sh@862 -- # return 0 00:11:56.454 11:08:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.713 [2024-12-06 11:08:07.701165] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:56.713 [2024-12-06 11:08:07.701421] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:56.713 request: 00:11:56.713 { 00:11:56.713 "name": "TLSTEST", 00:11:56.713 "trtype": "tcp", 00:11:56.713 "traddr": "10.0.0.2", 00:11:56.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.713 "adrfam": "ipv4", 00:11:56.713 "trsvcid": "4420", 00:11:56.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.713 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:56.713 "method": "bdev_nvme_attach_controller", 00:11:56.713 "req_id": 1 00:11:56.713 } 00:11:56.713 Got JSON-RPC error response 00:11:56.713 response: 00:11:56.713 { 00:11:56.713 "code": -22, 00:11:56.713 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:56.713 } 00:11:56.713 11:08:07 -- target/tls.sh@36 -- # killprocess 77195 00:11:56.713 11:08:07 -- common/autotest_common.sh@936 -- # '[' -z 77195 ']' 00:11:56.713 11:08:07 -- common/autotest_common.sh@940 -- # kill -0 77195 00:11:56.713 11:08:07 -- common/autotest_common.sh@941 -- # uname 00:11:56.713 11:08:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.713 11:08:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77195 00:11:56.713 killing process with pid 77195 00:11:56.713 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.713 00:11:56.713 Latency(us) 00:11:56.713 [2024-12-06T11:08:07.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.713 [2024-12-06T11:08:07.860Z] =================================================================================================================== 00:11:56.713 [2024-12-06T11:08:07.860Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:56.713 11:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:56.713 11:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:56.713 11:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77195' 00:11:56.713 11:08:07 -- common/autotest_common.sh@955 -- # kill 77195 00:11:56.713 11:08:07 -- common/autotest_common.sh@960 -- # wait 77195 00:11:56.973 11:08:07 -- target/tls.sh@37 -- # return 1 00:11:56.973 11:08:07 -- common/autotest_common.sh@653 -- # es=1 00:11:56.973 11:08:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:56.973 11:08:07 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:56.973 11:08:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:56.973 11:08:07 -- target/tls.sh@183 -- # killprocess 77028 00:11:56.973 11:08:07 -- common/autotest_common.sh@936 -- # '[' -z 77028 ']' 00:11:56.973 11:08:07 -- common/autotest_common.sh@940 -- # kill -0 77028 00:11:56.973 11:08:07 -- common/autotest_common.sh@941 -- # uname 00:11:56.973 11:08:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.973 11:08:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77028 00:11:56.973 killing process with pid 77028 00:11:56.973 11:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:56.973 11:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:56.973 11:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77028' 00:11:56.973 11:08:07 -- common/autotest_common.sh@955 -- # kill 77028 00:11:56.973 11:08:07 -- common/autotest_common.sh@960 -- # wait 77028 00:11:56.973 11:08:08 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:56.973 11:08:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:56.973 11:08:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.973 11:08:08 -- common/autotest_common.sh@10 -- # set +x 00:11:56.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.973 11:08:08 -- nvmf/common.sh@469 -- # nvmfpid=77220 00:11:56.973 11:08:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:56.973 11:08:08 -- nvmf/common.sh@470 -- # waitforlisten 77220 00:11:56.973 11:08:08 -- common/autotest_common.sh@829 -- # '[' -z 77220 ']' 00:11:56.973 11:08:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.973 11:08:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.973 11:08:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.973 11:08:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.973 11:08:08 -- common/autotest_common.sh@10 -- # set +x 00:11:56.973 [2024-12-06 11:08:08.101041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:56.973 [2024-12-06 11:08:08.101327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.233 [2024-12-06 11:08:08.230785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.233 [2024-12-06 11:08:08.261628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:57.233 [2024-12-06 11:08:08.262054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.233 [2024-12-06 11:08:08.262106] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.233 [2024-12-06 11:08:08.262248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.233 [2024-12-06 11:08:08.262303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.170 11:08:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.170 11:08:09 -- common/autotest_common.sh@862 -- # return 0 00:11:58.170 11:08:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:58.170 11:08:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.170 11:08:09 -- common/autotest_common.sh@10 -- # set +x 00:11:58.170 11:08:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.170 11:08:09 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:58.170 11:08:09 -- common/autotest_common.sh@650 -- # local es=0 00:11:58.170 11:08:09 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:58.170 11:08:09 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:58.170 11:08:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.170 11:08:09 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:58.170 11:08:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.170 11:08:09 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:58.170 11:08:09 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:58.170 11:08:09 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:58.170 [2024-12-06 11:08:09.292391] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.170 11:08:09 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:58.738 11:08:09 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:58.738 [2024-12-06 11:08:09.772574] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:58.738 [2024-12-06 11:08:09.772821] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.738 11:08:09 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:58.997 malloc0 00:11:58.997 11:08:10 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:59.257 11:08:10 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.516 [2024-12-06 11:08:10.414279] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:59.516 [2024-12-06 11:08:10.414325] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:59.516 [2024-12-06 11:08:10.414358] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:59.516 request: 00:11:59.516 { 00:11:59.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.516 "host": "nqn.2016-06.io.spdk:host1", 00:11:59.516 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:59.516 "method": "nvmf_subsystem_add_host", 00:11:59.516 
"req_id": 1 00:11:59.516 } 00:11:59.516 Got JSON-RPC error response 00:11:59.516 response: 00:11:59.516 { 00:11:59.516 "code": -32603, 00:11:59.516 "message": "Internal error" 00:11:59.516 } 00:11:59.516 11:08:10 -- common/autotest_common.sh@653 -- # es=1 00:11:59.516 11:08:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:59.516 11:08:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:59.516 11:08:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:59.516 11:08:10 -- target/tls.sh@189 -- # killprocess 77220 00:11:59.516 11:08:10 -- common/autotest_common.sh@936 -- # '[' -z 77220 ']' 00:11:59.516 11:08:10 -- common/autotest_common.sh@940 -- # kill -0 77220 00:11:59.516 11:08:10 -- common/autotest_common.sh@941 -- # uname 00:11:59.516 11:08:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.516 11:08:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77220 00:11:59.516 11:08:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:59.516 killing process with pid 77220 00:11:59.516 11:08:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:59.516 11:08:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77220' 00:11:59.516 11:08:10 -- common/autotest_common.sh@955 -- # kill 77220 00:11:59.516 11:08:10 -- common/autotest_common.sh@960 -- # wait 77220 00:11:59.516 11:08:10 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.516 11:08:10 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:59.516 11:08:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:59.516 11:08:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.516 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:11:59.517 11:08:10 -- nvmf/common.sh@469 -- # nvmfpid=77277 00:11:59.517 11:08:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:59.517 11:08:10 -- nvmf/common.sh@470 -- # waitforlisten 77277 00:11:59.517 11:08:10 -- common/autotest_common.sh@829 -- # '[' -z 77277 ']' 00:11:59.517 11:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.517 11:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.517 11:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.517 11:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.517 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:11:59.776 [2024-12-06 11:08:10.668733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:59.776 [2024-12-06 11:08:10.669207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.776 [2024-12-06 11:08:10.811761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.776 [2024-12-06 11:08:10.842014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:59.776 [2024-12-06 11:08:10.842156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:59.776 [2024-12-06 11:08:10.842168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.776 [2024-12-06 11:08:10.842175] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.776 [2024-12-06 11:08:10.842210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.714 11:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.714 11:08:11 -- common/autotest_common.sh@862 -- # return 0 00:12:00.714 11:08:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:00.714 11:08:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.714 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:12:00.714 11:08:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.714 11:08:11 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:00.714 11:08:11 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:00.714 11:08:11 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:00.973 [2024-12-06 11:08:11.871866] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.973 11:08:11 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:00.973 11:08:12 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:01.231 [2024-12-06 11:08:12.372033] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:01.231 [2024-12-06 11:08:12.372273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.489 11:08:12 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:01.489 malloc0 00:12:01.489 11:08:12 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:01.747 11:08:12 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:02.006 11:08:13 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:02.006 11:08:13 -- target/tls.sh@197 -- # bdevperf_pid=77332 00:12:02.006 11:08:13 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:02.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:02.006 11:08:13 -- target/tls.sh@200 -- # waitforlisten 77332 /var/tmp/bdevperf.sock 00:12:02.006 11:08:13 -- common/autotest_common.sh@829 -- # '[' -z 77332 ']' 00:12:02.006 11:08:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:02.006 11:08:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.006 11:08:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
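Putting the traced RPCs together, the TLS bring-up that the trace above just walked through reduces to the following sequence. This is a sketch assembled from the rpc.py calls in the log; the addresses, NQNs, PSK path, and bdevperf socket are the ones this test uses, and bdevperf is assumed to have already been started with the same "-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10" options shown above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

    # Target side (nvmf_tgt on the default RPC socket): transport, subsystem,
    # TLS-enabled listener (-k), malloc namespace, and the per-host PSK.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator side: attach over TCP with the same PSK, then drive the verify workload.
    # The target looks the key up under the identity "NVMe0R01 <hostnqn> <subnqn>", which is
    # why the mismatched host/key combinations earlier in the log failed with
    # "Could not find PSK for identity".
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests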
00:12:02.006 11:08:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.006 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 [2024-12-06 11:08:13.140392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:02.006 [2024-12-06 11:08:13.140686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77332 ] 00:12:02.264 [2024-12-06 11:08:13.282941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.264 [2024-12-06 11:08:13.322963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.196 11:08:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.196 11:08:14 -- common/autotest_common.sh@862 -- # return 0 00:12:03.196 11:08:14 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.196 [2024-12-06 11:08:14.296207] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:03.455 TLSTESTn1 00:12:03.455 11:08:14 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:03.713 11:08:14 -- target/tls.sh@205 -- # tgtconf='{ 00:12:03.713 "subsystems": [ 00:12:03.713 { 00:12:03.713 "subsystem": "iobuf", 00:12:03.713 "config": [ 00:12:03.713 { 00:12:03.713 "method": "iobuf_set_options", 00:12:03.713 "params": { 00:12:03.713 "small_pool_count": 8192, 00:12:03.713 "large_pool_count": 1024, 00:12:03.713 "small_bufsize": 8192, 00:12:03.713 "large_bufsize": 135168 00:12:03.713 } 00:12:03.713 } 00:12:03.713 ] 00:12:03.713 }, 00:12:03.713 { 00:12:03.713 "subsystem": "sock", 00:12:03.713 "config": [ 00:12:03.713 { 00:12:03.713 "method": "sock_impl_set_options", 00:12:03.713 "params": { 00:12:03.713 "impl_name": "uring", 00:12:03.713 "recv_buf_size": 2097152, 00:12:03.713 "send_buf_size": 2097152, 00:12:03.713 "enable_recv_pipe": true, 00:12:03.713 "enable_quickack": false, 00:12:03.713 "enable_placement_id": 0, 00:12:03.713 "enable_zerocopy_send_server": false, 00:12:03.713 "enable_zerocopy_send_client": false, 00:12:03.713 "zerocopy_threshold": 0, 00:12:03.713 "tls_version": 0, 00:12:03.713 "enable_ktls": false 00:12:03.713 } 00:12:03.713 }, 00:12:03.713 { 00:12:03.713 "method": "sock_impl_set_options", 00:12:03.713 "params": { 00:12:03.713 "impl_name": "posix", 00:12:03.713 "recv_buf_size": 2097152, 00:12:03.713 "send_buf_size": 2097152, 00:12:03.713 "enable_recv_pipe": true, 00:12:03.713 "enable_quickack": false, 00:12:03.713 "enable_placement_id": 0, 00:12:03.713 "enable_zerocopy_send_server": true, 00:12:03.713 "enable_zerocopy_send_client": false, 00:12:03.713 "zerocopy_threshold": 0, 00:12:03.714 "tls_version": 0, 00:12:03.714 "enable_ktls": false 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "sock_impl_set_options", 00:12:03.714 "params": { 00:12:03.714 "impl_name": "ssl", 00:12:03.714 "recv_buf_size": 4096, 00:12:03.714 "send_buf_size": 4096, 00:12:03.714 "enable_recv_pipe": true, 00:12:03.714 "enable_quickack": false, 00:12:03.714 "enable_placement_id": 0, 00:12:03.714 "enable_zerocopy_send_server": true, 00:12:03.714 "enable_zerocopy_send_client": false, 00:12:03.714 
"zerocopy_threshold": 0, 00:12:03.714 "tls_version": 0, 00:12:03.714 "enable_ktls": false 00:12:03.714 } 00:12:03.714 } 00:12:03.714 ] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "vmd", 00:12:03.714 "config": [] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "accel", 00:12:03.714 "config": [ 00:12:03.714 { 00:12:03.714 "method": "accel_set_options", 00:12:03.714 "params": { 00:12:03.714 "small_cache_size": 128, 00:12:03.714 "large_cache_size": 16, 00:12:03.714 "task_count": 2048, 00:12:03.714 "sequence_count": 2048, 00:12:03.714 "buf_count": 2048 00:12:03.714 } 00:12:03.714 } 00:12:03.714 ] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "bdev", 00:12:03.714 "config": [ 00:12:03.714 { 00:12:03.714 "method": "bdev_set_options", 00:12:03.714 "params": { 00:12:03.714 "bdev_io_pool_size": 65535, 00:12:03.714 "bdev_io_cache_size": 256, 00:12:03.714 "bdev_auto_examine": true, 00:12:03.714 "iobuf_small_cache_size": 128, 00:12:03.714 "iobuf_large_cache_size": 16 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_raid_set_options", 00:12:03.714 "params": { 00:12:03.714 "process_window_size_kb": 1024 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_iscsi_set_options", 00:12:03.714 "params": { 00:12:03.714 "timeout_sec": 30 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_nvme_set_options", 00:12:03.714 "params": { 00:12:03.714 "action_on_timeout": "none", 00:12:03.714 "timeout_us": 0, 00:12:03.714 "timeout_admin_us": 0, 00:12:03.714 "keep_alive_timeout_ms": 10000, 00:12:03.714 "transport_retry_count": 4, 00:12:03.714 "arbitration_burst": 0, 00:12:03.714 "low_priority_weight": 0, 00:12:03.714 "medium_priority_weight": 0, 00:12:03.714 "high_priority_weight": 0, 00:12:03.714 "nvme_adminq_poll_period_us": 10000, 00:12:03.714 "nvme_ioq_poll_period_us": 0, 00:12:03.714 "io_queue_requests": 0, 00:12:03.714 "delay_cmd_submit": true, 00:12:03.714 "bdev_retry_count": 3, 00:12:03.714 "transport_ack_timeout": 0, 00:12:03.714 "ctrlr_loss_timeout_sec": 0, 00:12:03.714 "reconnect_delay_sec": 0, 00:12:03.714 "fast_io_fail_timeout_sec": 0, 00:12:03.714 "generate_uuids": false, 00:12:03.714 "transport_tos": 0, 00:12:03.714 "io_path_stat": false, 00:12:03.714 "allow_accel_sequence": false 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_nvme_set_hotplug", 00:12:03.714 "params": { 00:12:03.714 "period_us": 100000, 00:12:03.714 "enable": false 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_malloc_create", 00:12:03.714 "params": { 00:12:03.714 "name": "malloc0", 00:12:03.714 "num_blocks": 8192, 00:12:03.714 "block_size": 4096, 00:12:03.714 "physical_block_size": 4096, 00:12:03.714 "uuid": "c9c15c88-0c8d-4f02-afb4-146e1c1510d7", 00:12:03.714 "optimal_io_boundary": 0 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "bdev_wait_for_examine" 00:12:03.714 } 00:12:03.714 ] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "nbd", 00:12:03.714 "config": [] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "scheduler", 00:12:03.714 "config": [ 00:12:03.714 { 00:12:03.714 "method": "framework_set_scheduler", 00:12:03.714 "params": { 00:12:03.714 "name": "static" 00:12:03.714 } 00:12:03.714 } 00:12:03.714 ] 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "subsystem": "nvmf", 00:12:03.714 "config": [ 00:12:03.714 { 00:12:03.714 "method": "nvmf_set_config", 00:12:03.714 "params": { 00:12:03.714 "discovery_filter": "match_any", 00:12:03.714 
"admin_cmd_passthru": { 00:12:03.714 "identify_ctrlr": false 00:12:03.714 } 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_set_max_subsystems", 00:12:03.714 "params": { 00:12:03.714 "max_subsystems": 1024 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_set_crdt", 00:12:03.714 "params": { 00:12:03.714 "crdt1": 0, 00:12:03.714 "crdt2": 0, 00:12:03.714 "crdt3": 0 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_create_transport", 00:12:03.714 "params": { 00:12:03.714 "trtype": "TCP", 00:12:03.714 "max_queue_depth": 128, 00:12:03.714 "max_io_qpairs_per_ctrlr": 127, 00:12:03.714 "in_capsule_data_size": 4096, 00:12:03.714 "max_io_size": 131072, 00:12:03.714 "io_unit_size": 131072, 00:12:03.714 "max_aq_depth": 128, 00:12:03.714 "num_shared_buffers": 511, 00:12:03.714 "buf_cache_size": 4294967295, 00:12:03.714 "dif_insert_or_strip": false, 00:12:03.714 "zcopy": false, 00:12:03.714 "c2h_success": false, 00:12:03.714 "sock_priority": 0, 00:12:03.714 "abort_timeout_sec": 1 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_create_subsystem", 00:12:03.714 "params": { 00:12:03.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.714 "allow_any_host": false, 00:12:03.714 "serial_number": "SPDK00000000000001", 00:12:03.714 "model_number": "SPDK bdev Controller", 00:12:03.714 "max_namespaces": 10, 00:12:03.714 "min_cntlid": 1, 00:12:03.714 "max_cntlid": 65519, 00:12:03.714 "ana_reporting": false 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_subsystem_add_host", 00:12:03.714 "params": { 00:12:03.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.714 "host": "nqn.2016-06.io.spdk:host1", 00:12:03.714 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:03.714 } 00:12:03.714 }, 00:12:03.714 { 00:12:03.714 "method": "nvmf_subsystem_add_ns", 00:12:03.714 "params": { 00:12:03.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.714 "namespace": { 00:12:03.715 "nsid": 1, 00:12:03.715 "bdev_name": "malloc0", 00:12:03.715 "nguid": "C9C15C880C8D4F02AFB4146E1C1510D7", 00:12:03.715 "uuid": "c9c15c88-0c8d-4f02-afb4-146e1c1510d7" 00:12:03.715 } 00:12:03.715 } 00:12:03.715 }, 00:12:03.715 { 00:12:03.715 "method": "nvmf_subsystem_add_listener", 00:12:03.715 "params": { 00:12:03.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.715 "listen_address": { 00:12:03.715 "trtype": "TCP", 00:12:03.715 "adrfam": "IPv4", 00:12:03.715 "traddr": "10.0.0.2", 00:12:03.715 "trsvcid": "4420" 00:12:03.715 }, 00:12:03.715 "secure_channel": true 00:12:03.715 } 00:12:03.715 } 00:12:03.715 ] 00:12:03.715 } 00:12:03.715 ] 00:12:03.715 }' 00:12:03.715 11:08:14 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:03.974 11:08:14 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:03.974 "subsystems": [ 00:12:03.974 { 00:12:03.974 "subsystem": "iobuf", 00:12:03.974 "config": [ 00:12:03.974 { 00:12:03.974 "method": "iobuf_set_options", 00:12:03.974 "params": { 00:12:03.974 "small_pool_count": 8192, 00:12:03.974 "large_pool_count": 1024, 00:12:03.974 "small_bufsize": 8192, 00:12:03.974 "large_bufsize": 135168 00:12:03.974 } 00:12:03.974 } 00:12:03.974 ] 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "subsystem": "sock", 00:12:03.974 "config": [ 00:12:03.974 { 00:12:03.974 "method": "sock_impl_set_options", 00:12:03.974 "params": { 00:12:03.974 "impl_name": "uring", 00:12:03.974 "recv_buf_size": 2097152, 00:12:03.974 "send_buf_size": 2097152, 
00:12:03.974 "enable_recv_pipe": true, 00:12:03.974 "enable_quickack": false, 00:12:03.974 "enable_placement_id": 0, 00:12:03.974 "enable_zerocopy_send_server": false, 00:12:03.974 "enable_zerocopy_send_client": false, 00:12:03.974 "zerocopy_threshold": 0, 00:12:03.974 "tls_version": 0, 00:12:03.974 "enable_ktls": false 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "sock_impl_set_options", 00:12:03.974 "params": { 00:12:03.974 "impl_name": "posix", 00:12:03.974 "recv_buf_size": 2097152, 00:12:03.974 "send_buf_size": 2097152, 00:12:03.974 "enable_recv_pipe": true, 00:12:03.974 "enable_quickack": false, 00:12:03.974 "enable_placement_id": 0, 00:12:03.974 "enable_zerocopy_send_server": true, 00:12:03.974 "enable_zerocopy_send_client": false, 00:12:03.974 "zerocopy_threshold": 0, 00:12:03.974 "tls_version": 0, 00:12:03.974 "enable_ktls": false 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "sock_impl_set_options", 00:12:03.974 "params": { 00:12:03.974 "impl_name": "ssl", 00:12:03.974 "recv_buf_size": 4096, 00:12:03.974 "send_buf_size": 4096, 00:12:03.974 "enable_recv_pipe": true, 00:12:03.974 "enable_quickack": false, 00:12:03.974 "enable_placement_id": 0, 00:12:03.974 "enable_zerocopy_send_server": true, 00:12:03.974 "enable_zerocopy_send_client": false, 00:12:03.974 "zerocopy_threshold": 0, 00:12:03.974 "tls_version": 0, 00:12:03.974 "enable_ktls": false 00:12:03.974 } 00:12:03.974 } 00:12:03.974 ] 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "subsystem": "vmd", 00:12:03.974 "config": [] 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "subsystem": "accel", 00:12:03.974 "config": [ 00:12:03.974 { 00:12:03.974 "method": "accel_set_options", 00:12:03.974 "params": { 00:12:03.974 "small_cache_size": 128, 00:12:03.974 "large_cache_size": 16, 00:12:03.974 "task_count": 2048, 00:12:03.974 "sequence_count": 2048, 00:12:03.974 "buf_count": 2048 00:12:03.974 } 00:12:03.974 } 00:12:03.974 ] 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "subsystem": "bdev", 00:12:03.974 "config": [ 00:12:03.974 { 00:12:03.974 "method": "bdev_set_options", 00:12:03.974 "params": { 00:12:03.974 "bdev_io_pool_size": 65535, 00:12:03.974 "bdev_io_cache_size": 256, 00:12:03.974 "bdev_auto_examine": true, 00:12:03.974 "iobuf_small_cache_size": 128, 00:12:03.974 "iobuf_large_cache_size": 16 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_raid_set_options", 00:12:03.974 "params": { 00:12:03.974 "process_window_size_kb": 1024 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_iscsi_set_options", 00:12:03.974 "params": { 00:12:03.974 "timeout_sec": 30 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_nvme_set_options", 00:12:03.974 "params": { 00:12:03.974 "action_on_timeout": "none", 00:12:03.974 "timeout_us": 0, 00:12:03.974 "timeout_admin_us": 0, 00:12:03.974 "keep_alive_timeout_ms": 10000, 00:12:03.974 "transport_retry_count": 4, 00:12:03.974 "arbitration_burst": 0, 00:12:03.974 "low_priority_weight": 0, 00:12:03.974 "medium_priority_weight": 0, 00:12:03.974 "high_priority_weight": 0, 00:12:03.974 "nvme_adminq_poll_period_us": 10000, 00:12:03.974 "nvme_ioq_poll_period_us": 0, 00:12:03.974 "io_queue_requests": 512, 00:12:03.974 "delay_cmd_submit": true, 00:12:03.974 "bdev_retry_count": 3, 00:12:03.974 "transport_ack_timeout": 0, 00:12:03.974 "ctrlr_loss_timeout_sec": 0, 00:12:03.974 "reconnect_delay_sec": 0, 00:12:03.974 "fast_io_fail_timeout_sec": 0, 00:12:03.974 "generate_uuids": false, 00:12:03.974 
"transport_tos": 0, 00:12:03.974 "io_path_stat": false, 00:12:03.974 "allow_accel_sequence": false 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_nvme_attach_controller", 00:12:03.974 "params": { 00:12:03.974 "name": "TLSTEST", 00:12:03.974 "trtype": "TCP", 00:12:03.974 "adrfam": "IPv4", 00:12:03.974 "traddr": "10.0.0.2", 00:12:03.974 "trsvcid": "4420", 00:12:03.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.974 "prchk_reftag": false, 00:12:03.974 "prchk_guard": false, 00:12:03.974 "ctrlr_loss_timeout_sec": 0, 00:12:03.974 "reconnect_delay_sec": 0, 00:12:03.974 "fast_io_fail_timeout_sec": 0, 00:12:03.974 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:03.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.974 "hdgst": false, 00:12:03.974 "ddgst": false 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_nvme_set_hotplug", 00:12:03.974 "params": { 00:12:03.974 "period_us": 100000, 00:12:03.974 "enable": false 00:12:03.974 } 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "method": "bdev_wait_for_examine" 00:12:03.974 } 00:12:03.974 ] 00:12:03.974 }, 00:12:03.974 { 00:12:03.974 "subsystem": "nbd", 00:12:03.974 "config": [] 00:12:03.974 } 00:12:03.974 ] 00:12:03.974 }' 00:12:03.974 11:08:14 -- target/tls.sh@208 -- # killprocess 77332 00:12:03.974 11:08:14 -- common/autotest_common.sh@936 -- # '[' -z 77332 ']' 00:12:03.974 11:08:14 -- common/autotest_common.sh@940 -- # kill -0 77332 00:12:03.974 11:08:14 -- common/autotest_common.sh@941 -- # uname 00:12:03.974 11:08:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.974 11:08:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77332 00:12:03.974 killing process with pid 77332 00:12:03.974 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.974 00:12:03.974 Latency(us) 00:12:03.974 [2024-12-06T11:08:15.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.974 [2024-12-06T11:08:15.121Z] =================================================================================================================== 00:12:03.974 [2024-12-06T11:08:15.121Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:03.974 11:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:03.975 11:08:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:03.975 11:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77332' 00:12:03.975 11:08:15 -- common/autotest_common.sh@955 -- # kill 77332 00:12:03.975 11:08:15 -- common/autotest_common.sh@960 -- # wait 77332 00:12:04.234 11:08:15 -- target/tls.sh@209 -- # killprocess 77277 00:12:04.234 11:08:15 -- common/autotest_common.sh@936 -- # '[' -z 77277 ']' 00:12:04.234 11:08:15 -- common/autotest_common.sh@940 -- # kill -0 77277 00:12:04.234 11:08:15 -- common/autotest_common.sh@941 -- # uname 00:12:04.234 11:08:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.234 11:08:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77277 00:12:04.234 killing process with pid 77277 00:12:04.234 11:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:04.234 11:08:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:04.234 11:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77277' 00:12:04.234 11:08:15 -- common/autotest_common.sh@955 -- # kill 77277 00:12:04.234 11:08:15 -- common/autotest_common.sh@960 -- # 
wait 77277 00:12:04.234 11:08:15 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:04.234 11:08:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:04.234 11:08:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.234 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:12:04.234 11:08:15 -- target/tls.sh@212 -- # echo '{ 00:12:04.234 "subsystems": [ 00:12:04.234 { 00:12:04.234 "subsystem": "iobuf", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "iobuf_set_options", 00:12:04.234 "params": { 00:12:04.234 "small_pool_count": 8192, 00:12:04.234 "large_pool_count": 1024, 00:12:04.234 "small_bufsize": 8192, 00:12:04.234 "large_bufsize": 135168 00:12:04.234 } 00:12:04.234 } 00:12:04.234 ] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "sock", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "sock_impl_set_options", 00:12:04.234 "params": { 00:12:04.234 "impl_name": "uring", 00:12:04.234 "recv_buf_size": 2097152, 00:12:04.234 "send_buf_size": 2097152, 00:12:04.234 "enable_recv_pipe": true, 00:12:04.234 "enable_quickack": false, 00:12:04.234 "enable_placement_id": 0, 00:12:04.234 "enable_zerocopy_send_server": false, 00:12:04.234 "enable_zerocopy_send_client": false, 00:12:04.234 "zerocopy_threshold": 0, 00:12:04.234 "tls_version": 0, 00:12:04.234 "enable_ktls": false 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "sock_impl_set_options", 00:12:04.234 "params": { 00:12:04.234 "impl_name": "posix", 00:12:04.234 "recv_buf_size": 2097152, 00:12:04.234 "send_buf_size": 2097152, 00:12:04.234 "enable_recv_pipe": true, 00:12:04.234 "enable_quickack": false, 00:12:04.234 "enable_placement_id": 0, 00:12:04.234 "enable_zerocopy_send_server": true, 00:12:04.234 "enable_zerocopy_send_client": false, 00:12:04.234 "zerocopy_threshold": 0, 00:12:04.234 "tls_version": 0, 00:12:04.234 "enable_ktls": false 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "sock_impl_set_options", 00:12:04.234 "params": { 00:12:04.234 "impl_name": "ssl", 00:12:04.234 "recv_buf_size": 4096, 00:12:04.234 "send_buf_size": 4096, 00:12:04.234 "enable_recv_pipe": true, 00:12:04.234 "enable_quickack": false, 00:12:04.234 "enable_placement_id": 0, 00:12:04.234 "enable_zerocopy_send_server": true, 00:12:04.234 "enable_zerocopy_send_client": false, 00:12:04.234 "zerocopy_threshold": 0, 00:12:04.234 "tls_version": 0, 00:12:04.234 "enable_ktls": false 00:12:04.234 } 00:12:04.234 } 00:12:04.234 ] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "vmd", 00:12:04.234 "config": [] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "accel", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "accel_set_options", 00:12:04.234 "params": { 00:12:04.234 "small_cache_size": 128, 00:12:04.234 "large_cache_size": 16, 00:12:04.234 "task_count": 2048, 00:12:04.234 "sequence_count": 2048, 00:12:04.234 "buf_count": 2048 00:12:04.234 } 00:12:04.234 } 00:12:04.234 ] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "bdev", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "bdev_set_options", 00:12:04.234 "params": { 00:12:04.234 "bdev_io_pool_size": 65535, 00:12:04.234 "bdev_io_cache_size": 256, 00:12:04.234 "bdev_auto_examine": true, 00:12:04.234 "iobuf_small_cache_size": 128, 00:12:04.234 "iobuf_large_cache_size": 16 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_raid_set_options", 00:12:04.234 "params": { 00:12:04.234 "process_window_size_kb": 1024 00:12:04.234 } 
00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_iscsi_set_options", 00:12:04.234 "params": { 00:12:04.234 "timeout_sec": 30 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_nvme_set_options", 00:12:04.234 "params": { 00:12:04.234 "action_on_timeout": "none", 00:12:04.234 "timeout_us": 0, 00:12:04.234 "timeout_admin_us": 0, 00:12:04.234 "keep_alive_timeout_ms": 10000, 00:12:04.234 "transport_retry_count": 4, 00:12:04.234 "arbitration_burst": 0, 00:12:04.234 "low_priority_weight": 0, 00:12:04.234 "medium_priority_weight": 0, 00:12:04.234 "high_priority_weight": 0, 00:12:04.234 "nvme_adminq_poll_period_us": 10000, 00:12:04.234 "nvme_ioq_poll_period_us": 0, 00:12:04.234 "io_queue_requests": 0, 00:12:04.234 "delay_cmd_submit": true, 00:12:04.234 "bdev_retry_count": 3, 00:12:04.234 "transport_ack_timeout": 0, 00:12:04.234 "ctrlr_loss_timeout_sec": 0, 00:12:04.234 "reconnect_delay_sec": 0, 00:12:04.234 "fast_io_fail_timeout_sec": 0, 00:12:04.234 "generate_uuids": false, 00:12:04.234 "transport_tos": 0, 00:12:04.234 "io_path_stat": false, 00:12:04.234 "allow_accel_sequence": false 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_nvme_set_hotplug", 00:12:04.234 "params": { 00:12:04.234 "period_us": 100000, 00:12:04.234 "enable": false 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_malloc_create", 00:12:04.234 "params": { 00:12:04.234 "name": "malloc0", 00:12:04.234 "num_blocks": 8192, 00:12:04.234 "block_size": 4096, 00:12:04.234 "physical_block_size": 4096, 00:12:04.234 "uuid": "c9c15c88-0c8d-4f02-afb4-146e1c1510d7", 00:12:04.234 "optimal_io_boundary": 0 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "bdev_wait_for_examine" 00:12:04.234 } 00:12:04.234 ] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "nbd", 00:12:04.234 "config": [] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "scheduler", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "framework_set_scheduler", 00:12:04.234 "params": { 00:12:04.234 "name": "static" 00:12:04.234 } 00:12:04.234 } 00:12:04.234 ] 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "subsystem": "nvmf", 00:12:04.234 "config": [ 00:12:04.234 { 00:12:04.234 "method": "nvmf_set_config", 00:12:04.234 "params": { 00:12:04.234 "discovery_filter": "match_any", 00:12:04.234 "admin_cmd_passthru": { 00:12:04.234 "identify_ctrlr": false 00:12:04.234 } 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "nvmf_set_max_subsystems", 00:12:04.234 "params": { 00:12:04.234 "max_subsystems": 1024 00:12:04.234 } 00:12:04.234 }, 00:12:04.234 { 00:12:04.234 "method": "nvmf_set_crdt", 00:12:04.234 "params": { 00:12:04.235 "crdt1": 0, 00:12:04.235 "crdt2": 0, 00:12:04.235 "crdt3": 0 00:12:04.235 } 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "method": "nvmf_create_transport", 00:12:04.235 "params": { 00:12:04.235 "trtype": "TCP", 00:12:04.235 "max_queue_depth": 128, 00:12:04.235 "max_io_qpairs_per_ctrlr": 127, 00:12:04.235 "in_capsule_data_size": 4096, 00:12:04.235 "max_io_size": 131072, 00:12:04.235 "io_unit_size": 131072, 00:12:04.235 "max_aq_depth": 128, 00:12:04.235 "num_shared_buffers": 511, 00:12:04.235 "buf_cache_size": 4294967295, 00:12:04.235 "dif_insert_or_strip": false, 00:12:04.235 "zcopy": false, 00:12:04.235 "c2h_success": false, 00:12:04.235 "sock_priority": 0, 00:12:04.235 "abort_timeout_sec": 1 00:12:04.235 } 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "method": "nvmf_create_subsystem", 00:12:04.235 "params": { 
00:12:04.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.235 "allow_any_host": false, 00:12:04.235 "serial_number": "SPDK00000000000001", 00:12:04.235 "model_number": "SPDK bdev Controller", 00:12:04.235 "max_namespaces": 10, 00:12:04.235 "min_cntlid": 1, 00:12:04.235 "max_cntlid": 65519, 00:12:04.235 "ana_reporting": false 00:12:04.235 } 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "method": "nvmf_subsystem_add_host", 00:12:04.235 "params": { 00:12:04.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.235 "host": "nqn.2016-06.io.spdk:host1", 00:12:04.235 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:04.235 } 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "method": "nvmf_subsystem_add_ns", 00:12:04.235 "params": { 00:12:04.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.235 "namespace": { 00:12:04.235 "nsid": 1, 00:12:04.235 "bdev_name": "malloc0", 00:12:04.235 "nguid": "C9C15C880C8D4F02AFB4146E1C1510D7", 00:12:04.235 "uuid": "c9c15c88-0c8d-4f02-afb4-146e1c1510d7" 00:12:04.235 } 00:12:04.235 } 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "method": "nvmf_subsystem_add_listener", 00:12:04.235 "params": { 00:12:04.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.235 "listen_address": { 00:12:04.235 "trtype": "TCP", 00:12:04.235 "adrfam": "IPv4", 00:12:04.235 "traddr": "10.0.0.2", 00:12:04.235 "trsvcid": "4420" 00:12:04.235 }, 00:12:04.235 "secure_channel": true 00:12:04.235 } 00:12:04.235 } 00:12:04.235 ] 00:12:04.235 } 00:12:04.235 ] 00:12:04.235 }' 00:12:04.235 11:08:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:04.235 11:08:15 -- nvmf/common.sh@469 -- # nvmfpid=77379 00:12:04.235 11:08:15 -- nvmf/common.sh@470 -- # waitforlisten 77379 00:12:04.235 11:08:15 -- common/autotest_common.sh@829 -- # '[' -z 77379 ']' 00:12:04.235 11:08:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.235 11:08:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.235 11:08:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.235 11:08:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.235 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:12:04.492 [2024-12-06 11:08:15.381687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:04.492 [2024-12-06 11:08:15.381780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.492 [2024-12-06 11:08:15.513323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.492 [2024-12-06 11:08:15.543705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:04.492 [2024-12-06 11:08:15.543850] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.492 [2024-12-06 11:08:15.543861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.492 [2024-12-06 11:08:15.543869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.492 [2024-12-06 11:08:15.543896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.750 [2024-12-06 11:08:15.718438] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.750 [2024-12-06 11:08:15.750399] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:04.750 [2024-12-06 11:08:15.750619] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.315 11:08:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.315 11:08:16 -- common/autotest_common.sh@862 -- # return 0 00:12:05.315 11:08:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:05.315 11:08:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.315 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:05.315 11:08:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.315 11:08:16 -- target/tls.sh@216 -- # bdevperf_pid=77407 00:12:05.315 11:08:16 -- target/tls.sh@217 -- # waitforlisten 77407 /var/tmp/bdevperf.sock 00:12:05.315 11:08:16 -- common/autotest_common.sh@829 -- # '[' -z 77407 ']' 00:12:05.315 11:08:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:05.315 11:08:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.315 11:08:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:05.315 11:08:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.315 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 11:08:16 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:05.315 11:08:16 -- target/tls.sh@213 -- # echo '{ 00:12:05.315 "subsystems": [ 00:12:05.315 { 00:12:05.315 "subsystem": "iobuf", 00:12:05.315 "config": [ 00:12:05.315 { 00:12:05.315 "method": "iobuf_set_options", 00:12:05.315 "params": { 00:12:05.315 "small_pool_count": 8192, 00:12:05.315 "large_pool_count": 1024, 00:12:05.315 "small_bufsize": 8192, 00:12:05.315 "large_bufsize": 135168 00:12:05.315 } 00:12:05.315 } 00:12:05.315 ] 00:12:05.315 }, 00:12:05.315 { 00:12:05.315 "subsystem": "sock", 00:12:05.315 "config": [ 00:12:05.315 { 00:12:05.315 "method": "sock_impl_set_options", 00:12:05.315 "params": { 00:12:05.315 "impl_name": "uring", 00:12:05.315 "recv_buf_size": 2097152, 00:12:05.315 "send_buf_size": 2097152, 00:12:05.315 "enable_recv_pipe": true, 00:12:05.315 "enable_quickack": false, 00:12:05.315 "enable_placement_id": 0, 00:12:05.315 "enable_zerocopy_send_server": false, 00:12:05.315 "enable_zerocopy_send_client": false, 00:12:05.315 "zerocopy_threshold": 0, 00:12:05.316 "tls_version": 0, 00:12:05.316 "enable_ktls": false 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "sock_impl_set_options", 00:12:05.316 "params": { 00:12:05.316 "impl_name": "posix", 00:12:05.316 "recv_buf_size": 2097152, 00:12:05.316 "send_buf_size": 2097152, 00:12:05.316 "enable_recv_pipe": true, 00:12:05.316 "enable_quickack": false, 00:12:05.316 "enable_placement_id": 0, 00:12:05.316 "enable_zerocopy_send_server": true, 00:12:05.316 "enable_zerocopy_send_client": false, 00:12:05.316 "zerocopy_threshold": 0, 00:12:05.316 "tls_version": 0, 00:12:05.316 
"enable_ktls": false 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "sock_impl_set_options", 00:12:05.316 "params": { 00:12:05.316 "impl_name": "ssl", 00:12:05.316 "recv_buf_size": 4096, 00:12:05.316 "send_buf_size": 4096, 00:12:05.316 "enable_recv_pipe": true, 00:12:05.316 "enable_quickack": false, 00:12:05.316 "enable_placement_id": 0, 00:12:05.316 "enable_zerocopy_send_server": true, 00:12:05.316 "enable_zerocopy_send_client": false, 00:12:05.316 "zerocopy_threshold": 0, 00:12:05.316 "tls_version": 0, 00:12:05.316 "enable_ktls": false 00:12:05.316 } 00:12:05.316 } 00:12:05.316 ] 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "subsystem": "vmd", 00:12:05.316 "config": [] 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "subsystem": "accel", 00:12:05.316 "config": [ 00:12:05.316 { 00:12:05.316 "method": "accel_set_options", 00:12:05.316 "params": { 00:12:05.316 "small_cache_size": 128, 00:12:05.316 "large_cache_size": 16, 00:12:05.316 "task_count": 2048, 00:12:05.316 "sequence_count": 2048, 00:12:05.316 "buf_count": 2048 00:12:05.316 } 00:12:05.316 } 00:12:05.316 ] 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "subsystem": "bdev", 00:12:05.316 "config": [ 00:12:05.316 { 00:12:05.316 "method": "bdev_set_options", 00:12:05.316 "params": { 00:12:05.316 "bdev_io_pool_size": 65535, 00:12:05.316 "bdev_io_cache_size": 256, 00:12:05.316 "bdev_auto_examine": true, 00:12:05.316 "iobuf_small_cache_size": 128, 00:12:05.316 "iobuf_large_cache_size": 16 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "bdev_raid_set_options", 00:12:05.316 "params": { 00:12:05.316 "process_window_size_kb": 1024 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "bdev_iscsi_set_options", 00:12:05.316 "params": { 00:12:05.316 "timeout_sec": 30 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "bdev_nvme_set_options", 00:12:05.316 "params": { 00:12:05.316 "action_on_timeout": "none", 00:12:05.316 "timeout_us": 0, 00:12:05.316 "timeout_admin_us": 0, 00:12:05.316 "keep_alive_timeout_ms": 10000, 00:12:05.316 "transport_retry_count": 4, 00:12:05.316 "arbitration_burst": 0, 00:12:05.316 "low_priority_weight": 0, 00:12:05.316 "medium_priority_weight": 0, 00:12:05.316 "high_priority_weight": 0, 00:12:05.316 "nvme_adminq_poll_period_us": 10000, 00:12:05.316 "nvme_ioq_poll_period_us": 0, 00:12:05.316 "io_queue_requests": 512, 00:12:05.316 "delay_cmd_submit": true, 00:12:05.316 "bdev_retry_count": 3, 00:12:05.316 "transport_ack_timeout": 0, 00:12:05.316 "ctrlr_loss_timeout_sec": 0, 00:12:05.316 "reconnect_delay_sec": 0, 00:12:05.316 "fast_io_fail_timeout_sec": 0, 00:12:05.316 "generate_uuids": false, 00:12:05.316 "transport_tos": 0, 00:12:05.316 "io_path_stat": false, 00:12:05.316 "allow_accel_sequence": false 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "bdev_nvme_attach_controller", 00:12:05.316 "params": { 00:12:05.316 "name": "TLSTEST", 00:12:05.316 "trtype": "TCP", 00:12:05.316 "adrfam": "IPv4", 00:12:05.316 "traddr": "10.0.0.2", 00:12:05.316 "trsvcid": "4420", 00:12:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.316 "prchk_reftag": false, 00:12:05.316 "prchk_guard": false, 00:12:05.316 "ctrlr_loss_timeout_sec": 0, 00:12:05.316 "reconnect_delay_sec": 0, 00:12:05.316 "fast_io_fail_timeout_sec": 0, 00:12:05.316 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:05.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.316 "hdgst": false, 00:12:05.316 "ddgst": false 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 
{ 00:12:05.316 "method": "bdev_nvme_set_hotplug", 00:12:05.316 "params": { 00:12:05.316 "period_us": 100000, 00:12:05.316 "enable": false 00:12:05.316 } 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "method": "bdev_wait_for_examine" 00:12:05.316 } 00:12:05.316 ] 00:12:05.316 }, 00:12:05.316 { 00:12:05.316 "subsystem": "nbd", 00:12:05.316 "config": [] 00:12:05.316 } 00:12:05.316 ] 00:12:05.316 }' 00:12:05.316 [2024-12-06 11:08:16.385439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:05.316 [2024-12-06 11:08:16.385558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77407 ] 00:12:05.574 [2024-12-06 11:08:16.527447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.574 [2024-12-06 11:08:16.566638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.574 [2024-12-06 11:08:16.692461] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:06.507 11:08:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.508 11:08:17 -- common/autotest_common.sh@862 -- # return 0 00:12:06.508 11:08:17 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:06.508 Running I/O for 10 seconds... 00:12:16.479 00:12:16.479 Latency(us) 00:12:16.479 [2024-12-06T11:08:27.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.479 [2024-12-06T11:08:27.626Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:16.479 Verification LBA range: start 0x0 length 0x2000 00:12:16.479 TLSTESTn1 : 10.01 6912.05 27.00 0.00 0.00 18492.83 2338.44 21328.99 00:12:16.479 [2024-12-06T11:08:27.626Z] =================================================================================================================== 00:12:16.480 [2024-12-06T11:08:27.627Z] Total : 6912.05 27.00 0.00 0.00 18492.83 2338.44 21328.99 00:12:16.480 0 00:12:16.480 11:08:27 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:16.480 11:08:27 -- target/tls.sh@223 -- # killprocess 77407 00:12:16.480 11:08:27 -- common/autotest_common.sh@936 -- # '[' -z 77407 ']' 00:12:16.480 11:08:27 -- common/autotest_common.sh@940 -- # kill -0 77407 00:12:16.480 11:08:27 -- common/autotest_common.sh@941 -- # uname 00:12:16.480 11:08:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.480 11:08:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77407 00:12:16.480 killing process with pid 77407 00:12:16.480 Received shutdown signal, test time was about 10.000000 seconds 00:12:16.480 00:12:16.480 Latency(us) 00:12:16.480 [2024-12-06T11:08:27.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.480 [2024-12-06T11:08:27.627Z] =================================================================================================================== 00:12:16.480 [2024-12-06T11:08:27.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:16.480 11:08:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:16.480 11:08:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:16.480 11:08:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77407' 00:12:16.480 11:08:27 -- 
common/autotest_common.sh@955 -- # kill 77407 00:12:16.480 11:08:27 -- common/autotest_common.sh@960 -- # wait 77407 00:12:16.480 11:08:27 -- target/tls.sh@224 -- # killprocess 77379 00:12:16.480 11:08:27 -- common/autotest_common.sh@936 -- # '[' -z 77379 ']' 00:12:16.480 11:08:27 -- common/autotest_common.sh@940 -- # kill -0 77379 00:12:16.480 11:08:27 -- common/autotest_common.sh@941 -- # uname 00:12:16.480 11:08:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.480 11:08:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77379 00:12:16.791 killing process with pid 77379 00:12:16.791 11:08:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:16.791 11:08:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:16.791 11:08:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77379' 00:12:16.791 11:08:27 -- common/autotest_common.sh@955 -- # kill 77379 00:12:16.791 11:08:27 -- common/autotest_common.sh@960 -- # wait 77379 00:12:16.791 11:08:27 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:16.791 11:08:27 -- target/tls.sh@227 -- # cleanup 00:12:16.791 11:08:27 -- target/tls.sh@15 -- # process_shm --id 0 00:12:16.791 11:08:27 -- common/autotest_common.sh@806 -- # type=--id 00:12:16.791 11:08:27 -- common/autotest_common.sh@807 -- # id=0 00:12:16.791 11:08:27 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:16.791 11:08:27 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:16.791 11:08:27 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:16.791 11:08:27 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:16.791 11:08:27 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:16.791 11:08:27 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:16.791 nvmf_trace.0 00:12:16.791 11:08:27 -- common/autotest_common.sh@821 -- # return 0 00:12:16.791 11:08:27 -- target/tls.sh@16 -- # killprocess 77407 00:12:16.791 11:08:27 -- common/autotest_common.sh@936 -- # '[' -z 77407 ']' 00:12:16.791 11:08:27 -- common/autotest_common.sh@940 -- # kill -0 77407 00:12:16.791 Process with pid 77407 is not found 00:12:16.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77407) - No such process 00:12:16.791 11:08:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77407 is not found' 00:12:16.791 11:08:27 -- target/tls.sh@17 -- # nvmftestfini 00:12:16.791 11:08:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:16.791 11:08:27 -- nvmf/common.sh@116 -- # sync 00:12:17.078 11:08:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:17.078 11:08:27 -- nvmf/common.sh@119 -- # set +e 00:12:17.078 11:08:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:17.078 11:08:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:17.078 rmmod nvme_tcp 00:12:17.078 rmmod nvme_fabrics 00:12:17.078 rmmod nvme_keyring 00:12:17.078 11:08:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:17.078 11:08:27 -- nvmf/common.sh@123 -- # set -e 00:12:17.078 11:08:27 -- nvmf/common.sh@124 -- # return 0 00:12:17.078 11:08:27 -- nvmf/common.sh@477 -- # '[' -n 77379 ']' 00:12:17.078 11:08:27 -- nvmf/common.sh@478 -- # killprocess 77379 00:12:17.078 11:08:27 -- common/autotest_common.sh@936 -- # '[' -z 77379 ']' 00:12:17.078 Process with pid 77379 is not found 00:12:17.078 11:08:27 -- 
common/autotest_common.sh@940 -- # kill -0 77379 00:12:17.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77379) - No such process 00:12:17.078 11:08:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77379 is not found' 00:12:17.078 11:08:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:17.078 11:08:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:17.078 11:08:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:17.078 11:08:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.078 11:08:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:17.078 11:08:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.078 11:08:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.078 11:08:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.078 11:08:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:17.078 11:08:28 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:17.078 ************************************ 00:12:17.078 END TEST nvmf_tls 00:12:17.078 ************************************ 00:12:17.078 00:12:17.078 real 1m6.190s 00:12:17.078 user 1m42.591s 00:12:17.078 sys 0m22.896s 00:12:17.078 11:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:17.078 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:17.078 11:08:28 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:17.078 11:08:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:17.078 11:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.078 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:17.078 ************************************ 00:12:17.078 START TEST nvmf_fips 00:12:17.078 ************************************ 00:12:17.078 11:08:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:17.078 * Looking for test storage... 
00:12:17.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:17.078 11:08:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:17.078 11:08:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:17.078 11:08:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:17.078 11:08:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:17.078 11:08:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:17.078 11:08:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:17.078 11:08:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:17.078 11:08:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:17.078 11:08:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:17.078 11:08:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.078 11:08:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:17.078 11:08:28 -- scripts/common.sh@337 -- # local 'op=<' 00:12:17.078 11:08:28 -- scripts/common.sh@339 -- # ver1_l=2 00:12:17.078 11:08:28 -- scripts/common.sh@340 -- # ver2_l=1 00:12:17.078 11:08:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:17.078 11:08:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:17.078 11:08:28 -- scripts/common.sh@344 -- # : 1 00:12:17.078 11:08:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:17.078 11:08:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.079 11:08:28 -- scripts/common.sh@364 -- # decimal 1 00:12:17.079 11:08:28 -- scripts/common.sh@352 -- # local d=1 00:12:17.079 11:08:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.079 11:08:28 -- scripts/common.sh@354 -- # echo 1 00:12:17.079 11:08:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:17.339 11:08:28 -- scripts/common.sh@365 -- # decimal 2 00:12:17.339 11:08:28 -- scripts/common.sh@352 -- # local d=2 00:12:17.339 11:08:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.339 11:08:28 -- scripts/common.sh@354 -- # echo 2 00:12:17.339 11:08:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:17.339 11:08:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:17.339 11:08:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:17.339 11:08:28 -- scripts/common.sh@367 -- # return 0 00:12:17.339 11:08:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.339 11:08:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:17.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.339 --rc genhtml_branch_coverage=1 00:12:17.339 --rc genhtml_function_coverage=1 00:12:17.339 --rc genhtml_legend=1 00:12:17.339 --rc geninfo_all_blocks=1 00:12:17.339 --rc geninfo_unexecuted_blocks=1 00:12:17.339 00:12:17.339 ' 00:12:17.339 11:08:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:17.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.339 --rc genhtml_branch_coverage=1 00:12:17.339 --rc genhtml_function_coverage=1 00:12:17.339 --rc genhtml_legend=1 00:12:17.339 --rc geninfo_all_blocks=1 00:12:17.339 --rc geninfo_unexecuted_blocks=1 00:12:17.339 00:12:17.339 ' 00:12:17.339 11:08:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:17.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.339 --rc genhtml_branch_coverage=1 00:12:17.339 --rc genhtml_function_coverage=1 00:12:17.339 --rc genhtml_legend=1 00:12:17.339 --rc geninfo_all_blocks=1 00:12:17.339 --rc geninfo_unexecuted_blocks=1 00:12:17.339 00:12:17.339 ' 00:12:17.339 
11:08:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:17.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.339 --rc genhtml_branch_coverage=1 00:12:17.339 --rc genhtml_function_coverage=1 00:12:17.339 --rc genhtml_legend=1 00:12:17.339 --rc geninfo_all_blocks=1 00:12:17.339 --rc geninfo_unexecuted_blocks=1 00:12:17.339 00:12:17.339 ' 00:12:17.339 11:08:28 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:17.339 11:08:28 -- nvmf/common.sh@7 -- # uname -s 00:12:17.339 11:08:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.339 11:08:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.339 11:08:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.339 11:08:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.339 11:08:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.339 11:08:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.339 11:08:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.339 11:08:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.339 11:08:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.339 11:08:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.339 11:08:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:12:17.339 11:08:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:12:17.339 11:08:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.339 11:08:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.339 11:08:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:17.339 11:08:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.339 11:08:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.339 11:08:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.339 11:08:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.339 11:08:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.339 11:08:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.339 11:08:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.339 11:08:28 -- paths/export.sh@5 -- # export PATH 00:12:17.339 11:08:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.339 11:08:28 -- nvmf/common.sh@46 -- # : 0 00:12:17.339 11:08:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:17.339 11:08:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:17.339 11:08:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:17.339 11:08:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.339 11:08:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.339 11:08:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:17.339 11:08:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:17.339 11:08:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:17.339 11:08:28 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:17.339 11:08:28 -- fips/fips.sh@89 -- # check_openssl_version 00:12:17.339 11:08:28 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:17.339 11:08:28 -- fips/fips.sh@85 -- # openssl version 00:12:17.339 11:08:28 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:17.339 11:08:28 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:17.339 11:08:28 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:17.339 11:08:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:17.339 11:08:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:17.339 11:08:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:17.339 11:08:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:17.340 11:08:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.340 11:08:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:17.340 11:08:28 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:17.340 11:08:28 -- scripts/common.sh@339 -- # ver1_l=3 00:12:17.340 11:08:28 -- scripts/common.sh@340 -- # ver2_l=3 00:12:17.340 11:08:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:17.340 11:08:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:17.340 11:08:28 -- scripts/common.sh@347 -- # : 1 00:12:17.340 11:08:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:17.340 11:08:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.340 11:08:28 -- scripts/common.sh@364 -- # decimal 3 00:12:17.340 11:08:28 -- scripts/common.sh@352 -- # local d=3 00:12:17.340 11:08:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:17.340 11:08:28 -- scripts/common.sh@354 -- # echo 3 00:12:17.340 11:08:28 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:17.340 11:08:28 -- scripts/common.sh@365 -- # decimal 3 00:12:17.340 11:08:28 -- scripts/common.sh@352 -- # local d=3 00:12:17.340 11:08:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:17.340 11:08:28 -- scripts/common.sh@354 -- # echo 3 00:12:17.340 11:08:28 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:17.340 11:08:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:17.340 11:08:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:17.340 11:08:28 -- scripts/common.sh@363 -- # (( v++ )) 00:12:17.340 11:08:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.340 11:08:28 -- scripts/common.sh@364 -- # decimal 1 00:12:17.340 11:08:28 -- scripts/common.sh@352 -- # local d=1 00:12:17.340 11:08:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.340 11:08:28 -- scripts/common.sh@354 -- # echo 1 00:12:17.340 11:08:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:17.340 11:08:28 -- scripts/common.sh@365 -- # decimal 0 00:12:17.340 11:08:28 -- scripts/common.sh@352 -- # local d=0 00:12:17.340 11:08:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:17.340 11:08:28 -- scripts/common.sh@354 -- # echo 0 00:12:17.340 11:08:28 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:17.340 11:08:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:17.340 11:08:28 -- scripts/common.sh@366 -- # return 0 00:12:17.340 11:08:28 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:17.340 11:08:28 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:17.340 11:08:28 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:17.340 11:08:28 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:17.340 11:08:28 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:17.340 11:08:28 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:17.340 11:08:28 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:17.340 11:08:28 -- fips/fips.sh@113 -- # build_openssl_config 00:12:17.340 11:08:28 -- fips/fips.sh@37 -- # cat 00:12:17.340 11:08:28 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:12:17.340 11:08:28 -- fips/fips.sh@58 -- # cat - 00:12:17.340 11:08:28 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:17.340 11:08:28 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:17.340 11:08:28 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:17.340 11:08:28 -- fips/fips.sh@116 -- # openssl list -providers 00:12:17.340 11:08:28 -- fips/fips.sh@116 -- # grep name 00:12:17.340 11:08:28 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:17.340 11:08:28 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:17.340 11:08:28 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:17.340 11:08:28 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:17.340 11:08:28 -- fips/fips.sh@127 -- # : 00:12:17.340 11:08:28 -- common/autotest_common.sh@650 -- # local es=0 00:12:17.340 11:08:28 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:17.340 11:08:28 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:17.340 11:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.340 11:08:28 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:17.340 11:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.340 11:08:28 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:17.340 11:08:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.340 11:08:28 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:17.340 11:08:28 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:17.340 11:08:28 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:17.340 Error setting digest 00:12:17.340 40820EA0747F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:17.340 40820EA0747F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:17.340 11:08:28 -- common/autotest_common.sh@653 -- # es=1 00:12:17.340 11:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.340 11:08:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.340 11:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.340 11:08:28 -- fips/fips.sh@130 -- # nvmftestinit 00:12:17.340 11:08:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:17.340 11:08:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.340 11:08:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:17.340 11:08:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:17.340 11:08:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:17.340 11:08:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.340 11:08:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.340 11:08:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.340 11:08:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:17.340 11:08:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:17.340 11:08:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:17.340 11:08:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:17.340 11:08:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:17.340 11:08:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:17.340 11:08:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.340 11:08:28 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.340 11:08:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:17.340 11:08:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:17.340 11:08:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:17.340 11:08:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:17.340 11:08:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:17.340 11:08:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.340 11:08:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:17.340 11:08:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:17.340 11:08:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:17.340 11:08:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:17.340 11:08:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:17.340 11:08:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:17.340 Cannot find device "nvmf_tgt_br" 00:12:17.340 11:08:28 -- nvmf/common.sh@154 -- # true 00:12:17.340 11:08:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.340 Cannot find device "nvmf_tgt_br2" 00:12:17.340 11:08:28 -- nvmf/common.sh@155 -- # true 00:12:17.340 11:08:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:17.340 11:08:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:17.340 Cannot find device "nvmf_tgt_br" 00:12:17.599 11:08:28 -- nvmf/common.sh@157 -- # true 00:12:17.599 11:08:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:17.599 Cannot find device "nvmf_tgt_br2" 00:12:17.599 11:08:28 -- nvmf/common.sh@158 -- # true 00:12:17.599 11:08:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:17.600 11:08:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:17.600 11:08:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.600 11:08:28 -- nvmf/common.sh@161 -- # true 00:12:17.600 11:08:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.600 11:08:28 -- nvmf/common.sh@162 -- # true 00:12:17.600 11:08:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:17.600 11:08:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:17.600 11:08:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:17.600 11:08:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:17.600 11:08:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:17.600 11:08:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:17.600 11:08:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:17.600 11:08:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:17.600 11:08:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:17.600 11:08:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:17.600 11:08:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:17.600 11:08:28 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:17.600 11:08:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:17.600 11:08:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:17.600 11:08:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:17.600 11:08:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:17.600 11:08:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:17.600 11:08:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:17.600 11:08:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:17.600 11:08:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:17.600 11:08:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:17.600 11:08:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:17.859 11:08:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:17.859 11:08:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:17.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:17.859 00:12:17.859 --- 10.0.0.2 ping statistics --- 00:12:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.859 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:17.859 11:08:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:17.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:17.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:17.859 00:12:17.859 --- 10.0.0.3 ping statistics --- 00:12:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.859 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:17.859 11:08:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:17.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:17.859 00:12:17.859 --- 10.0.0.1 ping statistics --- 00:12:17.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.859 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:17.859 11:08:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.859 11:08:28 -- nvmf/common.sh@421 -- # return 0 00:12:17.859 11:08:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:17.859 11:08:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.859 11:08:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:17.859 11:08:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:17.859 11:08:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.859 11:08:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:17.859 11:08:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:17.859 11:08:28 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:17.859 11:08:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:17.859 11:08:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.859 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:17.859 11:08:28 -- nvmf/common.sh@469 -- # nvmfpid=77762 00:12:17.859 11:08:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:17.859 11:08:28 -- nvmf/common.sh@470 -- # waitforlisten 77762 00:12:17.859 11:08:28 -- common/autotest_common.sh@829 -- # '[' -z 77762 ']' 00:12:17.859 11:08:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.859 11:08:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.859 11:08:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.859 11:08:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.859 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:17.859 [2024-12-06 11:08:28.866218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:17.859 [2024-12-06 11:08:28.866308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.119 [2024-12-06 11:08:29.007961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.119 [2024-12-06 11:08:29.038975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:18.119 [2024-12-06 11:08:29.039394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.119 [2024-12-06 11:08:29.039416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.119 [2024-12-06 11:08:29.039427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
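At this point the harness has a veth pair on each side of the nvmf_br bridge: 10.0.0.1 on nvmf_init_if (initiator side) and 10.0.0.2/10.0.0.3 on nvmf_tgt_if/nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, and the three pings above confirm connectivity in both directions. A quick sanity check once the target is listening might look like the following; this is illustrative only and not part of the captured run:

  # Illustrative sanity check: data path to the target, then ask the target what it exposes.
  ping -c 1 10.0.0.2                      # initiator -> target over the bridge
  ./scripts/rpc.py nvmf_get_subsystems    # RPC over /var/tmp/spdk.sock; lists subsystems and listeners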
00:12:18.119 [2024-12-06 11:08:29.039454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.056 11:08:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.056 11:08:29 -- common/autotest_common.sh@862 -- # return 0 00:12:19.056 11:08:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:19.056 11:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.056 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:19.056 11:08:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.056 11:08:29 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:19.056 11:08:29 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:19.056 11:08:29 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:19.056 11:08:29 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:19.056 11:08:29 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:19.056 11:08:29 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:19.057 11:08:29 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:19.057 11:08:29 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.057 [2024-12-06 11:08:30.165174] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.057 [2024-12-06 11:08:30.181138] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:19.057 [2024-12-06 11:08:30.181313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.316 malloc0 00:12:19.316 11:08:30 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:19.316 11:08:30 -- fips/fips.sh@147 -- # bdevperf_pid=77802 00:12:19.316 11:08:30 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:19.316 11:08:30 -- fips/fips.sh@148 -- # waitforlisten 77802 /var/tmp/bdevperf.sock 00:12:19.316 11:08:30 -- common/autotest_common.sh@829 -- # '[' -z 77802 ']' 00:12:19.316 11:08:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.316 11:08:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.316 11:08:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:19.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:19.316 11:08:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.316 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:19.316 [2024-12-06 11:08:30.293392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
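Note: the TLS flow exercised by fips.sh in the trace above can be read back out as a short sequence. The lines below are only an illustrative consolidation of commands already shown in this log (the PSK value, paths, subcommands and flags are copied from the fips.sh/rpc.py trace); backgrounding bdevperf with '&' and redirecting the key with '>' are readability assumptions, not a verbatim reproduction of fips.sh, and the target-side subsystem setup performed by setup_nvmf_tgt_conf is omitted.
# write the pre-shared key to a private file (echo -n so no trailing newline is added)
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
# start bdevperf idle (-z) with its own RPC socket, as in the fips.sh@145 trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach a TLS-protected NVMe/TCP controller using the PSK file, then run the verify workload
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests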
00:12:19.316 [2024-12-06 11:08:30.293471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77802 ] 00:12:19.316 [2024-12-06 11:08:30.428972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.575 [2024-12-06 11:08:30.470490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.144 11:08:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.144 11:08:31 -- common/autotest_common.sh@862 -- # return 0 00:12:20.144 11:08:31 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:20.403 [2024-12-06 11:08:31.412328] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:20.403 TLSTESTn1 00:12:20.403 11:08:31 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:20.662 Running I/O for 10 seconds... 00:12:30.642 00:12:30.642 Latency(us) 00:12:30.642 [2024-12-06T11:08:41.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.642 [2024-12-06T11:08:41.789Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:30.642 Verification LBA range: start 0x0 length 0x2000 00:12:30.642 TLSTESTn1 : 10.01 6132.57 23.96 0.00 0.00 20839.50 4408.79 27644.28 00:12:30.642 [2024-12-06T11:08:41.789Z] =================================================================================================================== 00:12:30.642 [2024-12-06T11:08:41.789Z] Total : 6132.57 23.96 0.00 0.00 20839.50 4408.79 27644.28 00:12:30.642 0 00:12:30.642 11:08:41 -- fips/fips.sh@1 -- # cleanup 00:12:30.642 11:08:41 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:30.642 11:08:41 -- common/autotest_common.sh@806 -- # type=--id 00:12:30.642 11:08:41 -- common/autotest_common.sh@807 -- # id=0 00:12:30.642 11:08:41 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:30.642 11:08:41 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:30.642 11:08:41 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:30.642 11:08:41 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:30.642 11:08:41 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:30.642 11:08:41 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:30.642 nvmf_trace.0 00:12:30.642 11:08:41 -- common/autotest_common.sh@821 -- # return 0 00:12:30.642 11:08:41 -- fips/fips.sh@16 -- # killprocess 77802 00:12:30.642 11:08:41 -- common/autotest_common.sh@936 -- # '[' -z 77802 ']' 00:12:30.642 11:08:41 -- common/autotest_common.sh@940 -- # kill -0 77802 00:12:30.642 11:08:41 -- common/autotest_common.sh@941 -- # uname 00:12:30.642 11:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.642 11:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77802 00:12:30.642 killing process with pid 77802 00:12:30.642 Received shutdown signal, test time was about 10.000000 seconds 00:12:30.642 00:12:30.642 Latency(us) 00:12:30.642 
[2024-12-06T11:08:41.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.642 [2024-12-06T11:08:41.789Z] =================================================================================================================== 00:12:30.642 [2024-12-06T11:08:41.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.642 11:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:30.642 11:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:30.642 11:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77802' 00:12:30.642 11:08:41 -- common/autotest_common.sh@955 -- # kill 77802 00:12:30.642 11:08:41 -- common/autotest_common.sh@960 -- # wait 77802 00:12:30.901 11:08:41 -- fips/fips.sh@17 -- # nvmftestfini 00:12:30.901 11:08:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.901 11:08:41 -- nvmf/common.sh@116 -- # sync 00:12:30.901 11:08:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.901 11:08:41 -- nvmf/common.sh@119 -- # set +e 00:12:30.901 11:08:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.901 11:08:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.901 rmmod nvme_tcp 00:12:30.901 rmmod nvme_fabrics 00:12:30.901 rmmod nvme_keyring 00:12:30.901 11:08:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.901 11:08:41 -- nvmf/common.sh@123 -- # set -e 00:12:30.901 11:08:41 -- nvmf/common.sh@124 -- # return 0 00:12:30.901 11:08:41 -- nvmf/common.sh@477 -- # '[' -n 77762 ']' 00:12:30.901 11:08:41 -- nvmf/common.sh@478 -- # killprocess 77762 00:12:30.901 11:08:41 -- common/autotest_common.sh@936 -- # '[' -z 77762 ']' 00:12:30.901 11:08:41 -- common/autotest_common.sh@940 -- # kill -0 77762 00:12:30.901 11:08:41 -- common/autotest_common.sh@941 -- # uname 00:12:30.901 11:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.901 11:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77762 00:12:30.901 killing process with pid 77762 00:12:30.901 11:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:30.901 11:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:30.901 11:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77762' 00:12:30.901 11:08:41 -- common/autotest_common.sh@955 -- # kill 77762 00:12:30.901 11:08:41 -- common/autotest_common.sh@960 -- # wait 77762 00:12:31.159 11:08:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.159 11:08:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.159 11:08:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.159 11:08:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.159 11:08:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.159 11:08:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.160 11:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.160 11:08:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.160 11:08:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.160 11:08:42 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:31.160 00:12:31.160 real 0m14.118s 00:12:31.160 user 0m18.947s 00:12:31.160 sys 0m5.792s 00:12:31.160 11:08:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.160 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:31.160 ************************************ 00:12:31.160 END TEST 
nvmf_fips 00:12:31.160 ************************************ 00:12:31.160 11:08:42 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:31.160 11:08:42 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:31.160 11:08:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.160 11:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.160 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:31.160 ************************************ 00:12:31.160 START TEST nvmf_fuzz 00:12:31.160 ************************************ 00:12:31.160 11:08:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:31.160 * Looking for test storage... 00:12:31.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.444 11:08:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.444 11:08:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.444 11:08:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.444 11:08:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.444 11:08:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.444 11:08:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.444 11:08:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.444 11:08:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.444 11:08:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.444 11:08:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.444 11:08:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.444 11:08:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.444 11:08:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.444 11:08:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.444 11:08:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.444 11:08:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.444 11:08:42 -- scripts/common.sh@344 -- # : 1 00:12:31.444 11:08:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.444 11:08:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.444 11:08:42 -- scripts/common.sh@364 -- # decimal 1 00:12:31.444 11:08:42 -- scripts/common.sh@352 -- # local d=1 00:12:31.444 11:08:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.444 11:08:42 -- scripts/common.sh@354 -- # echo 1 00:12:31.444 11:08:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.444 11:08:42 -- scripts/common.sh@365 -- # decimal 2 00:12:31.444 11:08:42 -- scripts/common.sh@352 -- # local d=2 00:12:31.444 11:08:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.444 11:08:42 -- scripts/common.sh@354 -- # echo 2 00:12:31.444 11:08:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.444 11:08:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.444 11:08:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.444 11:08:42 -- scripts/common.sh@367 -- # return 0 00:12:31.444 11:08:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.444 11:08:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.444 --rc genhtml_branch_coverage=1 00:12:31.444 --rc genhtml_function_coverage=1 00:12:31.444 --rc genhtml_legend=1 00:12:31.444 --rc geninfo_all_blocks=1 00:12:31.444 --rc geninfo_unexecuted_blocks=1 00:12:31.444 00:12:31.444 ' 00:12:31.444 11:08:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.444 --rc genhtml_branch_coverage=1 00:12:31.444 --rc genhtml_function_coverage=1 00:12:31.444 --rc genhtml_legend=1 00:12:31.444 --rc geninfo_all_blocks=1 00:12:31.444 --rc geninfo_unexecuted_blocks=1 00:12:31.444 00:12:31.444 ' 00:12:31.444 11:08:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.444 --rc genhtml_branch_coverage=1 00:12:31.444 --rc genhtml_function_coverage=1 00:12:31.444 --rc genhtml_legend=1 00:12:31.444 --rc geninfo_all_blocks=1 00:12:31.444 --rc geninfo_unexecuted_blocks=1 00:12:31.444 00:12:31.444 ' 00:12:31.444 11:08:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.444 --rc genhtml_branch_coverage=1 00:12:31.444 --rc genhtml_function_coverage=1 00:12:31.444 --rc genhtml_legend=1 00:12:31.444 --rc geninfo_all_blocks=1 00:12:31.444 --rc geninfo_unexecuted_blocks=1 00:12:31.444 00:12:31.444 ' 00:12:31.444 11:08:42 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.444 11:08:42 -- nvmf/common.sh@7 -- # uname -s 00:12:31.444 11:08:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.444 11:08:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.444 11:08:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.444 11:08:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.444 11:08:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.444 11:08:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.444 11:08:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.444 11:08:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.444 11:08:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.444 11:08:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 
00:12:31.444 11:08:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:12:31.444 11:08:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.444 11:08:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.444 11:08:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.444 11:08:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.444 11:08:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.444 11:08:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.444 11:08:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.444 11:08:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.444 11:08:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.444 11:08:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.444 11:08:42 -- paths/export.sh@5 -- # export PATH 00:12:31.444 11:08:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.444 11:08:42 -- nvmf/common.sh@46 -- # : 0 00:12:31.444 11:08:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.444 11:08:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.444 11:08:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.444 11:08:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.444 11:08:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.444 11:08:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:31.444 11:08:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.444 11:08:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.444 11:08:42 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:31.444 11:08:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.444 11:08:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.444 11:08:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.444 11:08:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.444 11:08:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.444 11:08:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.444 11:08:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.444 11:08:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.444 11:08:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.444 11:08:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.444 11:08:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.444 11:08:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.444 11:08:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.444 11:08:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.444 11:08:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.444 11:08:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.444 11:08:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.444 11:08:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.444 11:08:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.444 11:08:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.444 11:08:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.444 11:08:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.444 11:08:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.444 11:08:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.444 Cannot find device "nvmf_tgt_br" 00:12:31.444 11:08:42 -- nvmf/common.sh@154 -- # true 00:12:31.444 11:08:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.444 Cannot find device "nvmf_tgt_br2" 00:12:31.444 11:08:42 -- nvmf/common.sh@155 -- # true 00:12:31.444 11:08:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.444 11:08:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.444 Cannot find device "nvmf_tgt_br" 00:12:31.444 11:08:42 -- nvmf/common.sh@157 -- # true 00:12:31.444 11:08:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.444 Cannot find device "nvmf_tgt_br2" 00:12:31.444 11:08:42 -- nvmf/common.sh@158 -- # true 00:12:31.444 11:08:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.444 11:08:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.444 11:08:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.703 11:08:42 -- nvmf/common.sh@161 -- # true 00:12:31.703 11:08:42 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.703 11:08:42 -- nvmf/common.sh@162 -- # true 00:12:31.703 11:08:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.703 11:08:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.703 11:08:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.703 11:08:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.703 11:08:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.703 11:08:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.704 11:08:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.704 11:08:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.704 11:08:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.704 11:08:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.704 11:08:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:31.704 11:08:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.704 11:08:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:31.704 11:08:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.704 11:08:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.704 11:08:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.704 11:08:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.704 11:08:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.704 11:08:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.704 11:08:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.704 11:08:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.704 11:08:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.704 11:08:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.704 11:08:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:31.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:31.704 00:12:31.704 --- 10.0.0.2 ping statistics --- 00:12:31.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.704 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:31.704 11:08:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:31.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:31.704 00:12:31.704 --- 10.0.0.3 ping statistics --- 00:12:31.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.704 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:31.704 11:08:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:31.704 00:12:31.704 --- 10.0.0.1 ping statistics --- 00:12:31.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.704 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:31.704 11:08:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.704 11:08:42 -- nvmf/common.sh@421 -- # return 0 00:12:31.704 11:08:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:31.704 11:08:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.704 11:08:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:31.704 11:08:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:31.704 11:08:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.704 11:08:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:31.704 11:08:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:31.704 11:08:42 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78133 00:12:31.704 11:08:42 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.704 11:08:42 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78133 00:12:31.704 11:08:42 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.704 11:08:42 -- common/autotest_common.sh@829 -- # '[' -z 78133 ']' 00:12:31.704 11:08:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.704 11:08:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.704 11:08:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
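For orientation, the nvmf_veth_init sequence traced above builds the test topology summarized below. This is only a condensed restatement of the ip/iptables commands already visible in the log (interface names, addresses and rules are taken from the nvmf/common.sh trace); the loop and the sh -c grouping are shorthand, not the script's literal structure.
# the target runs in its own network namespace, reachable from the host over a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays on the host (10.0.0.1)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface (10.0.0.3)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and join the host-side peers to one bridge
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP (port 4420) in on the initiator interface and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT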
00:12:31.704 11:08:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.704 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:33.080 11:08:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.081 11:08:43 -- common/autotest_common.sh@862 -- # return 0 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.081 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:33.081 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 Malloc0 00:12:33.081 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.081 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.081 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.081 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:33.081 11:08:43 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:33.081 Shutting down the fuzz application 00:12:33.081 11:08:44 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:33.648 Shutting down the fuzz application 00:12:33.648 11:08:44 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.648 11:08:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.648 11:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 11:08:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.648 11:08:44 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:33.648 11:08:44 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:33.648 11:08:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.648 11:08:44 -- nvmf/common.sh@116 -- # sync 00:12:33.648 11:08:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.648 11:08:44 -- nvmf/common.sh@119 -- # set +e 00:12:33.648 11:08:44 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.648 11:08:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.648 rmmod nvme_tcp 00:12:33.648 rmmod nvme_fabrics 00:12:33.648 rmmod nvme_keyring 00:12:33.648 11:08:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:33.648 11:08:44 -- nvmf/common.sh@123 -- # set -e 00:12:33.648 11:08:44 -- nvmf/common.sh@124 -- # return 0 00:12:33.648 11:08:44 -- nvmf/common.sh@477 -- # '[' -n 78133 ']' 00:12:33.648 11:08:44 -- nvmf/common.sh@478 -- # killprocess 78133 00:12:33.648 11:08:44 -- common/autotest_common.sh@936 -- # '[' -z 78133 ']' 00:12:33.648 11:08:44 -- common/autotest_common.sh@940 -- # kill -0 78133 00:12:33.648 11:08:44 -- common/autotest_common.sh@941 -- # uname 00:12:33.648 11:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.648 11:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78133 00:12:33.648 killing process with pid 78133 00:12:33.648 11:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.648 11:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.648 11:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78133' 00:12:33.648 11:08:44 -- common/autotest_common.sh@955 -- # kill 78133 00:12:33.648 11:08:44 -- common/autotest_common.sh@960 -- # wait 78133 00:12:33.905 11:08:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:33.905 11:08:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:33.905 11:08:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:33.905 11:08:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.905 11:08:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:33.905 11:08:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.905 11:08:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.905 11:08:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.905 11:08:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:33.905 11:08:44 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:33.905 00:12:33.905 real 0m2.641s 00:12:33.905 user 0m2.783s 00:12:33.905 sys 0m0.554s 00:12:33.905 ************************************ 00:12:33.905 END TEST nvmf_fuzz 00:12:33.905 ************************************ 00:12:33.905 11:08:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.905 11:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:33.905 11:08:44 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:33.905 11:08:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:33.905 11:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.905 11:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:33.905 ************************************ 00:12:33.905 START TEST nvmf_multiconnection 00:12:33.905 ************************************ 00:12:33.905 11:08:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:33.905 * Looking for test storage... 
00:12:33.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.905 11:08:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:33.905 11:08:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:33.905 11:08:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:34.165 11:08:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:34.165 11:08:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:34.165 11:08:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:34.165 11:08:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:34.165 11:08:45 -- scripts/common.sh@335 -- # IFS=.-: 00:12:34.165 11:08:45 -- scripts/common.sh@335 -- # read -ra ver1 00:12:34.165 11:08:45 -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.165 11:08:45 -- scripts/common.sh@336 -- # read -ra ver2 00:12:34.165 11:08:45 -- scripts/common.sh@337 -- # local 'op=<' 00:12:34.165 11:08:45 -- scripts/common.sh@339 -- # ver1_l=2 00:12:34.165 11:08:45 -- scripts/common.sh@340 -- # ver2_l=1 00:12:34.165 11:08:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:34.165 11:08:45 -- scripts/common.sh@343 -- # case "$op" in 00:12:34.165 11:08:45 -- scripts/common.sh@344 -- # : 1 00:12:34.166 11:08:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:34.166 11:08:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.166 11:08:45 -- scripts/common.sh@364 -- # decimal 1 00:12:34.166 11:08:45 -- scripts/common.sh@352 -- # local d=1 00:12:34.166 11:08:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.166 11:08:45 -- scripts/common.sh@354 -- # echo 1 00:12:34.166 11:08:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.166 11:08:45 -- scripts/common.sh@365 -- # decimal 2 00:12:34.166 11:08:45 -- scripts/common.sh@352 -- # local d=2 00:12:34.166 11:08:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.166 11:08:45 -- scripts/common.sh@354 -- # echo 2 00:12:34.166 11:08:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.166 11:08:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.166 11:08:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.166 11:08:45 -- scripts/common.sh@367 -- # return 0 00:12:34.166 11:08:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.166 11:08:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.166 --rc genhtml_branch_coverage=1 00:12:34.166 --rc genhtml_function_coverage=1 00:12:34.166 --rc genhtml_legend=1 00:12:34.166 --rc geninfo_all_blocks=1 00:12:34.166 --rc geninfo_unexecuted_blocks=1 00:12:34.166 00:12:34.166 ' 00:12:34.166 11:08:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.166 --rc genhtml_branch_coverage=1 00:12:34.166 --rc genhtml_function_coverage=1 00:12:34.166 --rc genhtml_legend=1 00:12:34.166 --rc geninfo_all_blocks=1 00:12:34.166 --rc geninfo_unexecuted_blocks=1 00:12:34.166 00:12:34.166 ' 00:12:34.166 11:08:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.166 --rc genhtml_branch_coverage=1 00:12:34.166 --rc genhtml_function_coverage=1 00:12:34.166 --rc genhtml_legend=1 00:12:34.166 --rc geninfo_all_blocks=1 00:12:34.166 --rc geninfo_unexecuted_blocks=1 00:12:34.166 00:12:34.166 ' 00:12:34.166 
11:08:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.166 --rc genhtml_branch_coverage=1 00:12:34.166 --rc genhtml_function_coverage=1 00:12:34.166 --rc genhtml_legend=1 00:12:34.166 --rc geninfo_all_blocks=1 00:12:34.166 --rc geninfo_unexecuted_blocks=1 00:12:34.166 00:12:34.166 ' 00:12:34.166 11:08:45 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.166 11:08:45 -- nvmf/common.sh@7 -- # uname -s 00:12:34.166 11:08:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.166 11:08:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.166 11:08:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.166 11:08:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.166 11:08:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.166 11:08:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.166 11:08:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.166 11:08:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.166 11:08:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.166 11:08:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:12:34.166 11:08:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:12:34.166 11:08:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.166 11:08:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.166 11:08:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.166 11:08:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.166 11:08:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.166 11:08:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.166 11:08:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.166 11:08:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.166 11:08:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.166 11:08:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.166 11:08:45 -- paths/export.sh@5 -- # export PATH 00:12:34.166 11:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.166 11:08:45 -- nvmf/common.sh@46 -- # : 0 00:12:34.166 11:08:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.166 11:08:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.166 11:08:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.166 11:08:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.166 11:08:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.166 11:08:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.166 11:08:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.166 11:08:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.166 11:08:45 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.166 11:08:45 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.166 11:08:45 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:34.166 11:08:45 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:34.166 11:08:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.166 11:08:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.166 11:08:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.166 11:08:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.166 11:08:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.166 11:08:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.166 11:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.166 11:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.166 11:08:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:34.166 11:08:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:34.166 11:08:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.166 11:08:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.166 11:08:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.166 11:08:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:34.166 11:08:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.166 11:08:45 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.166 11:08:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.166 11:08:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.166 11:08:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.166 11:08:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.166 11:08:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.166 11:08:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.166 11:08:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:34.166 11:08:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:34.166 Cannot find device "nvmf_tgt_br" 00:12:34.166 11:08:45 -- nvmf/common.sh@154 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.166 Cannot find device "nvmf_tgt_br2" 00:12:34.166 11:08:45 -- nvmf/common.sh@155 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:34.166 11:08:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:34.166 Cannot find device "nvmf_tgt_br" 00:12:34.166 11:08:45 -- nvmf/common.sh@157 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:34.166 Cannot find device "nvmf_tgt_br2" 00:12:34.166 11:08:45 -- nvmf/common.sh@158 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:34.166 11:08:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:34.166 11:08:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.166 11:08:45 -- nvmf/common.sh@161 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.166 11:08:45 -- nvmf/common.sh@162 -- # true 00:12:34.166 11:08:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.166 11:08:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.166 11:08:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.166 11:08:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.166 11:08:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.423 11:08:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.423 11:08:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.423 11:08:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.423 11:08:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.423 11:08:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:34.423 11:08:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:34.423 11:08:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:34.423 11:08:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:34.423 11:08:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.423 11:08:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:34.423 11:08:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:34.423 11:08:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:34.423 11:08:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:34.423 11:08:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.423 11:08:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.423 11:08:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.423 11:08:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.423 11:08:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.423 11:08:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:34.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:34.423 00:12:34.423 --- 10.0.0.2 ping statistics --- 00:12:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.423 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:34.423 11:08:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:34.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:34.423 00:12:34.423 --- 10.0.0.3 ping statistics --- 00:12:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.423 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:34.423 11:08:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:34.423 00:12:34.423 --- 10.0.0.1 ping statistics --- 00:12:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.423 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:34.423 11:08:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.423 11:08:45 -- nvmf/common.sh@421 -- # return 0 00:12:34.423 11:08:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:34.423 11:08:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.423 11:08:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:34.423 11:08:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:34.423 11:08:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.423 11:08:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:34.423 11:08:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:34.423 11:08:45 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:34.423 11:08:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:34.423 11:08:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.423 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.423 11:08:45 -- nvmf/common.sh@469 -- # nvmfpid=78335 00:12:34.423 11:08:45 -- nvmf/common.sh@470 -- # waitforlisten 78335 00:12:34.423 11:08:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.423 11:08:45 -- common/autotest_common.sh@829 -- # '[' -z 78335 ']' 00:12:34.423 11:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.423 11:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.423 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:34.423 11:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.423 11:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.423 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.423 [2024-12-06 11:08:45.555311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:34.423 [2024-12-06 11:08:45.555439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.680 [2024-12-06 11:08:45.698359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.680 [2024-12-06 11:08:45.734100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:34.680 [2024-12-06 11:08:45.734259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.680 [2024-12-06 11:08:45.734273] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.680 [2024-12-06 11:08:45.734281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.680 [2024-12-06 11:08:45.734354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.680 [2024-12-06 11:08:45.735003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.680 [2024-12-06 11:08:45.735226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.680 [2024-12-06 11:08:45.735227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.680 11:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.680 11:08:45 -- common/autotest_common.sh@862 -- # return 0 00:12:34.680 11:08:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:34.680 11:08:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.680 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.938 11:08:45 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 [2024-12-06 11:08:45.857468] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:34.938 11:08:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:34.938 11:08:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 Malloc1 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 
11:08:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 [2024-12-06 11:08:45.937767] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:34.938 11:08:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 Malloc2 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:34.938 11:08:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:34.938 11:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 Malloc3 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
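The per-subsystem pattern that repeats from here on (Malloc1 … Malloc11 backing cnode1 … cnode11) amounts to the loop sketched below. It is a sketch only: it calls scripts/rpc.py directly where the trace goes through the suite's rpc_cmd helper, and the sizes, NQNs and listener address are taken from the multiconnection.sh values traced above (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SUBSYS=11).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # one TCP transport shared by all subsystems
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done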
00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:34.938 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 Malloc4 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.938 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:34.938 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:34.938 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.938 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc5 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.197 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc6 00:12:35.197 11:08:46 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.197 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc7 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.197 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc8 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 
-- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.197 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc9 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.197 11:08:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 Malloc10 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.197 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.197 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.197 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:35.197 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.198 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.198 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.198 11:08:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.198 11:08:46 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:35.198 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.198 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.468 Malloc11 00:12:35.468 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.468 11:08:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:35.468 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.468 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.468 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.468 11:08:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:35.468 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.468 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.468 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.468 11:08:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:35.468 11:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.468 11:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:35.468 11:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.468 11:08:46 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:35.468 11:08:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:35.468 11:08:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.468 11:08:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:35.468 11:08:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:35.468 11:08:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.468 11:08:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:35.468 11:08:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:37.407 11:08:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:37.407 11:08:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:37.407 11:08:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:37.407 11:08:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:37.407 11:08:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.407 11:08:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:37.407 11:08:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.407 11:08:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:37.666 11:08:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:37.666 11:08:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.666 11:08:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.666 11:08:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.666 11:08:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.572 11:08:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.572 11:08:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:12:39.572 11:08:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:39.572 11:08:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.572 11:08:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.572 11:08:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.572 11:08:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:39.572 11:08:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:39.831 11:08:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:39.831 11:08:50 -- common/autotest_common.sh@1187 -- # local i=0 00:12:39.831 11:08:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.831 11:08:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:39.831 11:08:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.735 11:08:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.736 11:08:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.736 11:08:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:41.736 11:08:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:41.736 11:08:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.736 11:08:52 -- common/autotest_common.sh@1197 -- # return 0 00:12:41.736 11:08:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:41.736 11:08:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:41.994 11:08:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:41.994 11:08:52 -- common/autotest_common.sh@1187 -- # local i=0 00:12:41.994 11:08:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.994 11:08:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:41.994 11:08:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:43.896 11:08:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:43.896 11:08:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:43.896 11:08:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:43.896 11:08:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:43.896 11:08:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.896 11:08:55 -- common/autotest_common.sh@1197 -- # return 0 00:12:43.897 11:08:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:43.897 11:08:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:44.156 11:08:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:44.156 11:08:55 -- common/autotest_common.sh@1187 -- # local i=0 00:12:44.156 11:08:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.156 11:08:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:44.156 11:08:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:46.057 11:08:57 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:46.057 11:08:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:46.057 11:08:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:46.057 11:08:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:46.057 11:08:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.057 11:08:57 -- common/autotest_common.sh@1197 -- # return 0 00:12:46.057 11:08:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:46.057 11:08:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:46.316 11:08:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:46.316 11:08:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.316 11:08:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.316 11:08:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:46.316 11:08:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:48.220 11:08:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:48.220 11:08:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:48.220 11:08:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:48.221 11:08:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:48.221 11:08:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.221 11:08:59 -- common/autotest_common.sh@1197 -- # return 0 00:12:48.221 11:08:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:48.221 11:08:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:48.479 11:08:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:48.479 11:08:59 -- common/autotest_common.sh@1187 -- # local i=0 00:12:48.479 11:08:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.479 11:08:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:48.479 11:08:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:50.383 11:09:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:50.383 11:09:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:50.383 11:09:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:50.383 11:09:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:50.383 11:09:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.383 11:09:01 -- common/autotest_common.sh@1197 -- # return 0 00:12:50.383 11:09:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:50.383 11:09:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:50.642 11:09:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:50.642 11:09:01 -- common/autotest_common.sh@1187 -- # local i=0 00:12:50.642 11:09:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.642 11:09:01 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:50.642 11:09:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.546 11:09:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.546 11:09:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:52.546 11:09:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.546 11:09:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.546 11:09:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.546 11:09:03 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.546 11:09:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.546 11:09:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:52.804 11:09:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:52.804 11:09:03 -- common/autotest_common.sh@1187 -- # local i=0 00:12:52.804 11:09:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.804 11:09:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:52.804 11:09:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:54.703 11:09:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:54.703 11:09:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:54.703 11:09:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:54.703 11:09:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:54.703 11:09:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.703 11:09:05 -- common/autotest_common.sh@1197 -- # return 0 00:12:54.703 11:09:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:54.703 11:09:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:54.960 11:09:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:54.960 11:09:05 -- common/autotest_common.sh@1187 -- # local i=0 00:12:54.960 11:09:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.960 11:09:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:54.960 11:09:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:56.857 11:09:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:56.857 11:09:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:56.857 11:09:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:57.115 11:09:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:57.115 11:09:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.115 11:09:08 -- common/autotest_common.sh@1197 -- # return 0 00:12:57.115 11:09:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:57.115 11:09:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:57.115 11:09:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:57.115 11:09:08 -- common/autotest_common.sh@1187 -- # local i=0 
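[Reference sketch] The host-side loop traced above (and finishing just below) pairs each nvme connect with a poll on lsblk until a namespace reporting the expected serial appears (the waitforserial helper). A condensed sketch, with $HOSTNQN/$HOSTID standing in for the uuid-based values shown in the trace and the retry bound and 2-second sleep taken from the helper above:

  # Connect to each subsystem over TCP, then wait for the SPDK$i serial to show up in lsblk.
  for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
         -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
      tries=$((tries + 1)); [ "$tries" -gt 15 ] && break   # give up after ~15 polls
      sleep 2
    done
  done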
00:12:57.115 11:09:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.115 11:09:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:57.115 11:09:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:59.641 11:09:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:59.641 11:09:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:59.641 11:09:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:59.641 11:09:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:59.641 11:09:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.641 11:09:10 -- common/autotest_common.sh@1197 -- # return 0 00:12:59.641 11:09:10 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:59.641 [global] 00:12:59.641 thread=1 00:12:59.641 invalidate=1 00:12:59.641 rw=read 00:12:59.641 time_based=1 00:12:59.641 runtime=10 00:12:59.641 ioengine=libaio 00:12:59.641 direct=1 00:12:59.641 bs=262144 00:12:59.641 iodepth=64 00:12:59.641 norandommap=1 00:12:59.641 numjobs=1 00:12:59.641 00:12:59.641 [job0] 00:12:59.641 filename=/dev/nvme0n1 00:12:59.641 [job1] 00:12:59.641 filename=/dev/nvme10n1 00:12:59.641 [job2] 00:12:59.641 filename=/dev/nvme1n1 00:12:59.641 [job3] 00:12:59.641 filename=/dev/nvme2n1 00:12:59.641 [job4] 00:12:59.641 filename=/dev/nvme3n1 00:12:59.641 [job5] 00:12:59.641 filename=/dev/nvme4n1 00:12:59.641 [job6] 00:12:59.641 filename=/dev/nvme5n1 00:12:59.641 [job7] 00:12:59.641 filename=/dev/nvme6n1 00:12:59.641 [job8] 00:12:59.641 filename=/dev/nvme7n1 00:12:59.641 [job9] 00:12:59.641 filename=/dev/nvme8n1 00:12:59.641 [job10] 00:12:59.641 filename=/dev/nvme9n1 00:12:59.641 Could not set queue depth (nvme0n1) 00:12:59.641 Could not set queue depth (nvme10n1) 00:12:59.641 Could not set queue depth (nvme1n1) 00:12:59.641 Could not set queue depth (nvme2n1) 00:12:59.641 Could not set queue depth (nvme3n1) 00:12:59.641 Could not set queue depth (nvme4n1) 00:12:59.641 Could not set queue depth (nvme5n1) 00:12:59.641 Could not set queue depth (nvme6n1) 00:12:59.641 Could not set queue depth (nvme7n1) 00:12:59.641 Could not set queue depth (nvme8n1) 00:12:59.641 Could not set queue depth (nvme9n1) 00:12:59.641 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:59.641 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:59.641 fio-3.35 00:12:59.641 Starting 11 threads 00:13:11.852 00:13:11.852 job0: (groupid=0, jobs=1): err= 0: pid=78781: Fri Dec 6 11:09:20 2024 00:13:11.852 read: IOPS=986, BW=247MiB/s (259MB/s)(2471MiB/10015msec) 00:13:11.852 slat (usec): min=19, max=32830, avg=1006.51, stdev=2238.93 00:13:11.852 clat (msec): min=12, max=123, avg=63.77, stdev= 9.91 00:13:11.852 lat (msec): min=15, max=123, avg=64.77, stdev= 9.98 00:13:11.852 clat percentiles (msec): 00:13:11.852 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:13:11.852 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:11.852 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 88], 00:13:11.852 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 116], 00:13:11.852 | 99.99th=[ 124] 00:13:11.852 bw ( KiB/s): min=164168, max=268288, per=12.05%, avg=251408.75, stdev=30605.46, samples=20 00:13:11.852 iops : min= 641, max= 1048, avg=981.95, stdev=119.57, samples=20 00:13:11.852 lat (msec) : 20=0.05%, 50=0.73%, 100=97.25%, 250=1.97% 00:13:11.852 cpu : usr=0.61%, sys=4.04%, ctx=2101, majf=0, minf=4097 00:13:11.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:11.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.852 issued rwts: total=9884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.852 job1: (groupid=0, jobs=1): err= 0: pid=78782: Fri Dec 6 11:09:20 2024 00:13:11.852 read: IOPS=470, BW=118MiB/s (123MB/s)(1190MiB/10113msec) 00:13:11.852 slat (usec): min=19, max=54086, avg=2097.18, stdev=5110.06 00:13:11.852 clat (msec): min=50, max=261, avg=133.69, stdev=24.63 00:13:11.852 lat (msec): min=50, max=261, avg=135.79, stdev=25.22 00:13:11.852 clat percentiles (msec): 00:13:11.852 | 1.00th=[ 59], 5.00th=[ 86], 10.00th=[ 93], 20.00th=[ 115], 00:13:11.852 | 30.00th=[ 126], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:13:11.852 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 157], 00:13:11.852 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 247], 99.95th=[ 247], 00:13:11.852 | 99.99th=[ 262] 00:13:11.852 bw ( KiB/s): min=102400, max=176640, per=5.76%, avg=120208.20, stdev=21524.77, samples=20 00:13:11.852 iops : min= 400, max= 690, avg=469.50, stdev=84.06, samples=20 00:13:11.852 lat (msec) : 100=14.45%, 250=85.53%, 500=0.02% 00:13:11.852 cpu : usr=0.28%, sys=2.06%, ctx=1144, majf=0, minf=4097 00:13:11.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:11.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.852 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.852 job2: (groupid=0, jobs=1): err= 0: pid=78783: Fri Dec 6 11:09:20 2024 00:13:11.852 read: IOPS=472, BW=118MiB/s (124MB/s)(1195MiB/10110msec) 00:13:11.852 slat (usec): min=20, max=37348, avg=2087.67, stdev=4877.80 00:13:11.852 clat (msec): min=32, max=252, avg=133.09, stdev=23.78 00:13:11.852 lat (msec): min=33, max=252, avg=135.18, stdev=24.33 00:13:11.852 clat percentiles (msec): 00:13:11.852 | 1.00th=[ 77], 5.00th=[ 86], 10.00th=[ 93], 
20.00th=[ 115], 00:13:11.852 | 30.00th=[ 127], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 144], 00:13:11.852 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:13:11.852 | 99.00th=[ 176], 99.50th=[ 201], 99.90th=[ 241], 99.95th=[ 253], 00:13:11.852 | 99.99th=[ 253] 00:13:11.852 bw ( KiB/s): min=105984, max=174080, per=5.79%, avg=120694.15, stdev=20224.83, samples=20 00:13:11.852 iops : min= 414, max= 680, avg=471.40, stdev=78.98, samples=20 00:13:11.852 lat (msec) : 50=0.08%, 100=14.63%, 250=85.21%, 500=0.08% 00:13:11.852 cpu : usr=0.19%, sys=1.95%, ctx=1165, majf=0, minf=4097 00:13:11.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:11.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.852 issued rwts: total=4779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.852 job3: (groupid=0, jobs=1): err= 0: pid=78784: Fri Dec 6 11:09:20 2024 00:13:11.852 read: IOPS=1696, BW=424MiB/s (445MB/s)(4246MiB/10010msec) 00:13:11.852 slat (usec): min=15, max=18953, avg=584.77, stdev=1277.65 00:13:11.852 clat (usec): min=8625, max=94171, avg=37092.73, stdev=10630.40 00:13:11.852 lat (usec): min=9895, max=94229, avg=37677.50, stdev=10786.76 00:13:11.852 clat percentiles (usec): 00:13:11.852 | 1.00th=[28967], 5.00th=[30540], 10.00th=[31065], 20.00th=[31851], 00:13:11.852 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:13:11.852 | 70.00th=[34341], 80.00th=[35390], 90.00th=[58983], 95.00th=[64226], 00:13:11.852 | 99.00th=[72877], 99.50th=[74974], 99.90th=[83362], 99.95th=[87557], 00:13:11.852 | 99.99th=[91751] 00:13:11.853 bw ( KiB/s): min=232448, max=497152, per=20.77%, avg=433147.35, stdev=99714.79, samples=20 00:13:11.853 iops : min= 908, max= 1942, avg=1691.90, stdev=389.52, samples=20 00:13:11.853 lat (msec) : 10=0.02%, 20=0.11%, 50=86.28%, 100=13.59% 00:13:11.853 cpu : usr=0.73%, sys=5.23%, ctx=3659, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=16984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job4: (groupid=0, jobs=1): err= 0: pid=78785: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=691, BW=173MiB/s (181MB/s)(1741MiB/10073msec) 00:13:11.853 slat (usec): min=20, max=48976, avg=1431.28, stdev=3009.44 00:13:11.853 clat (msec): min=31, max=159, avg=91.03, stdev= 7.29 00:13:11.853 lat (msec): min=32, max=159, avg=92.46, stdev= 7.38 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 87], 00:13:11.853 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:11.853 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 101], 00:13:11.853 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 148], 99.95th=[ 161], 00:13:11.853 | 99.99th=[ 161] 00:13:11.853 bw ( KiB/s): min=162304, max=182272, per=8.47%, avg=176622.65, stdev=4439.14, samples=20 00:13:11.853 iops : min= 634, max= 712, avg=689.85, stdev=17.35, samples=20 00:13:11.853 lat (msec) : 50=0.40%, 100=94.17%, 250=5.43% 00:13:11.853 cpu : usr=0.27%, sys=2.77%, ctx=1662, majf=0, minf=4097 00:13:11.853 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=6963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job5: (groupid=0, jobs=1): err= 0: pid=78786: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=471, BW=118MiB/s (124MB/s)(1192MiB/10111msec) 00:13:11.853 slat (usec): min=20, max=42654, avg=2093.82, stdev=4901.78 00:13:11.853 clat (msec): min=39, max=249, avg=133.42, stdev=24.19 00:13:11.853 lat (msec): min=39, max=249, avg=135.52, stdev=24.73 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 78], 5.00th=[ 87], 10.00th=[ 92], 20.00th=[ 115], 00:13:11.853 | 30.00th=[ 128], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 144], 00:13:11.853 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:13:11.853 | 99.00th=[ 178], 99.50th=[ 199], 99.90th=[ 243], 99.95th=[ 251], 00:13:11.853 | 99.99th=[ 251] 00:13:11.853 bw ( KiB/s): min=105984, max=177664, per=5.77%, avg=120438.90, stdev=20654.60, samples=20 00:13:11.853 iops : min= 414, max= 694, avg=470.40, stdev=80.67, samples=20 00:13:11.853 lat (msec) : 50=0.52%, 100=14.91%, 250=84.57% 00:13:11.853 cpu : usr=0.24%, sys=1.87%, ctx=1144, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=4769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job6: (groupid=0, jobs=1): err= 0: pid=78787: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=688, BW=172MiB/s (181MB/s)(1734MiB/10073msec) 00:13:11.853 slat (usec): min=19, max=19982, avg=1436.33, stdev=2987.09 00:13:11.853 clat (msec): min=19, max=156, avg=91.37, stdev= 6.99 00:13:11.853 lat (msec): min=20, max=156, avg=92.80, stdev= 7.08 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 77], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:13:11.853 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:11.853 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 102], 00:13:11.853 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 153], 99.95th=[ 157], 00:13:11.853 | 99.99th=[ 157] 00:13:11.853 bw ( KiB/s): min=161980, max=182784, per=8.43%, avg=175915.10, stdev=4817.67, samples=20 00:13:11.853 iops : min= 632, max= 714, avg=687.05, stdev=18.95, samples=20 00:13:11.853 lat (msec) : 20=0.01%, 50=0.13%, 100=93.76%, 250=6.10% 00:13:11.853 cpu : usr=0.41%, sys=2.95%, ctx=1641, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=6936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job7: (groupid=0, jobs=1): err= 0: pid=78788: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=989, BW=247MiB/s (259MB/s)(2477MiB/10015msec) 00:13:11.853 slat (usec): min=19, max=44738, avg=1004.78, stdev=2275.23 00:13:11.853 clat (msec): min=13, max=121, 
avg=63.61, stdev=10.18 00:13:11.853 lat (msec): min=19, max=121, avg=64.61, stdev=10.26 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:13:11.853 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:11.853 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 89], 00:13:11.853 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 121], 00:13:11.853 | 99.99th=[ 122] 00:13:11.853 bw ( KiB/s): min=158012, max=270848, per=12.08%, avg=251969.60, stdev=31527.88, samples=20 00:13:11.853 iops : min= 617, max= 1058, avg=984.10, stdev=123.15, samples=20 00:13:11.853 lat (msec) : 20=0.04%, 50=0.92%, 100=97.17%, 250=1.87% 00:13:11.853 cpu : usr=0.49%, sys=3.57%, ctx=2135, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=9908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job8: (groupid=0, jobs=1): err= 0: pid=78789: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=689, BW=172MiB/s (181MB/s)(1737MiB/10073msec) 00:13:11.853 slat (usec): min=15, max=39716, avg=1435.58, stdev=3013.98 00:13:11.853 clat (msec): min=17, max=154, avg=91.22, stdev= 7.06 00:13:11.853 lat (msec): min=17, max=154, avg=92.66, stdev= 7.15 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:13:11.853 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:11.853 | 70.00th=[ 94], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 101], 00:13:11.853 | 99.00th=[ 109], 99.50th=[ 114], 99.90th=[ 153], 99.95th=[ 155], 00:13:11.853 | 99.99th=[ 155] 00:13:11.853 bw ( KiB/s): min=165376, max=179712, per=8.45%, avg=176202.75, stdev=3898.41, samples=20 00:13:11.853 iops : min= 646, max= 702, avg=688.20, stdev=15.20, samples=20 00:13:11.853 lat (msec) : 20=0.04%, 50=0.24%, 100=94.55%, 250=5.17% 00:13:11.853 cpu : usr=0.32%, sys=2.28%, ctx=1699, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=6948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job9: (groupid=0, jobs=1): err= 0: pid=78790: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=472, BW=118MiB/s (124MB/s)(1195MiB/10113msec) 00:13:11.853 slat (usec): min=19, max=57307, avg=2079.47, stdev=5412.82 00:13:11.853 clat (msec): min=50, max=277, avg=133.20, stdev=24.86 00:13:11.853 lat (msec): min=50, max=277, avg=135.28, stdev=25.53 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 61], 5.00th=[ 83], 10.00th=[ 92], 20.00th=[ 115], 00:13:11.853 | 30.00th=[ 127], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:13:11.853 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:13:11.853 | 99.00th=[ 180], 99.50th=[ 199], 99.90th=[ 234], 99.95th=[ 234], 00:13:11.853 | 99.99th=[ 279] 00:13:11.853 bw ( KiB/s): min=100352, max=178176, per=5.79%, avg=120686.00, stdev=21680.67, samples=20 00:13:11.853 iops : min= 392, max= 696, avg=471.35, stdev=84.64, samples=20 00:13:11.853 
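[Reference sketch] The per-job statistics continue below. For context, the fio-wrapper flags used for this run (-i 262144 -d 64 -t read -r 10) map directly onto the generated job file printed before the run: bs=262144, iodepth=64, rw=read, runtime=10, with libaio, direct I/O and one job per connected namespace. A hand-written standalone equivalent for a single device (the device path is an example) would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=read --bs=262144 --iodepth=64 --norandommap=1 --numjobs=1 \
      --thread=1 --invalidate=1 --time_based=1 --runtime=10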
lat (msec) : 100=14.27%, 250=85.71%, 500=0.02% 00:13:11.853 cpu : usr=0.22%, sys=1.82%, ctx=1144, majf=0, minf=4097 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=4779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 job10: (groupid=0, jobs=1): err= 0: pid=78791: Fri Dec 6 11:09:20 2024 00:13:11.853 read: IOPS=563, BW=141MiB/s (148MB/s)(1423MiB/10108msec) 00:13:11.853 slat (usec): min=19, max=75327, avg=1744.94, stdev=4831.49 00:13:11.853 clat (msec): min=16, max=252, avg=111.78, stdev=42.86 00:13:11.853 lat (msec): min=17, max=252, avg=113.53, stdev=43.64 00:13:11.853 clat percentiles (msec): 00:13:11.853 | 1.00th=[ 50], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 63], 00:13:11.853 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 142], 60.00th=[ 144], 00:13:11.853 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 150], 95.00th=[ 155], 00:13:11.853 | 99.00th=[ 178], 99.50th=[ 201], 99.90th=[ 249], 99.95th=[ 253], 00:13:11.853 | 99.99th=[ 253] 00:13:11.853 bw ( KiB/s): min=101376, max=276480, per=6.91%, avg=144055.00, stdev=61477.60, samples=20 00:13:11.853 iops : min= 396, max= 1080, avg=562.65, stdev=240.08, samples=20 00:13:11.853 lat (msec) : 20=0.09%, 50=1.05%, 100=40.64%, 250=58.16%, 500=0.05% 00:13:11.853 cpu : usr=0.32%, sys=2.00%, ctx=1332, majf=0, minf=4098 00:13:11.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:11.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:11.853 issued rwts: total=5691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.853 00:13:11.853 Run status group 0 (all jobs): 00:13:11.853 READ: bw=2037MiB/s (2136MB/s), 118MiB/s-424MiB/s (123MB/s-445MB/s), io=20.1GiB (21.6GB), run=10010-10113msec 00:13:11.853 00:13:11.853 Disk stats (read/write): 00:13:11.853 nvme0n1: ios=19690/0, merge=0/0, ticks=1240392/0, in_queue=1240392, util=97.79% 00:13:11.853 nvme10n1: ios=9399/0, merge=0/0, ticks=1226261/0, in_queue=1226261, util=97.94% 00:13:11.853 nvme1n1: ios=9441/0, merge=0/0, ticks=1225022/0, in_queue=1225022, util=98.12% 00:13:11.853 nvme2n1: ios=32925/0, merge=0/0, ticks=1210803/0, in_queue=1210803, util=98.17% 00:13:11.853 nvme3n1: ios=13812/0, merge=0/0, ticks=1231281/0, in_queue=1231281, util=98.27% 00:13:11.853 nvme4n1: ios=9421/0, merge=0/0, ticks=1224439/0, in_queue=1224439, util=98.55% 00:13:11.853 nvme5n1: ios=13751/0, merge=0/0, ticks=1230083/0, in_queue=1230083, util=98.58% 00:13:11.853 nvme6n1: ios=19220/0, merge=0/0, ticks=1208707/0, in_queue=1208707, util=98.68% 00:13:11.853 nvme7n1: ios=13783/0, merge=0/0, ticks=1231780/0, in_queue=1231780, util=99.01% 00:13:11.853 nvme8n1: ios=9434/0, merge=0/0, ticks=1225294/0, in_queue=1225294, util=99.10% 00:13:11.853 nvme9n1: ios=11268/0, merge=0/0, ticks=1228025/0, in_queue=1228025, util=99.22% 00:13:11.853 11:09:20 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:11.853 [global] 00:13:11.853 thread=1 00:13:11.853 invalidate=1 00:13:11.853 rw=randwrite 00:13:11.853 time_based=1 00:13:11.853 runtime=10 00:13:11.853 
ioengine=libaio 00:13:11.853 direct=1 00:13:11.853 bs=262144 00:13:11.853 iodepth=64 00:13:11.853 norandommap=1 00:13:11.853 numjobs=1 00:13:11.853 00:13:11.853 [job0] 00:13:11.853 filename=/dev/nvme0n1 00:13:11.853 [job1] 00:13:11.853 filename=/dev/nvme10n1 00:13:11.853 [job2] 00:13:11.853 filename=/dev/nvme1n1 00:13:11.853 [job3] 00:13:11.853 filename=/dev/nvme2n1 00:13:11.853 [job4] 00:13:11.853 filename=/dev/nvme3n1 00:13:11.853 [job5] 00:13:11.853 filename=/dev/nvme4n1 00:13:11.853 [job6] 00:13:11.853 filename=/dev/nvme5n1 00:13:11.853 [job7] 00:13:11.854 filename=/dev/nvme6n1 00:13:11.854 [job8] 00:13:11.854 filename=/dev/nvme7n1 00:13:11.854 [job9] 00:13:11.854 filename=/dev/nvme8n1 00:13:11.854 [job10] 00:13:11.854 filename=/dev/nvme9n1 00:13:11.854 Could not set queue depth (nvme0n1) 00:13:11.854 Could not set queue depth (nvme10n1) 00:13:11.854 Could not set queue depth (nvme1n1) 00:13:11.854 Could not set queue depth (nvme2n1) 00:13:11.854 Could not set queue depth (nvme3n1) 00:13:11.854 Could not set queue depth (nvme4n1) 00:13:11.854 Could not set queue depth (nvme5n1) 00:13:11.854 Could not set queue depth (nvme6n1) 00:13:11.854 Could not set queue depth (nvme7n1) 00:13:11.854 Could not set queue depth (nvme8n1) 00:13:11.854 Could not set queue depth (nvme9n1) 00:13:11.854 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:11.854 fio-3.35 00:13:11.854 Starting 11 threads 00:13:21.830 00:13:21.830 job0: (groupid=0, jobs=1): err= 0: pid=78996: Fri Dec 6 11:09:31 2024 00:13:21.830 write: IOPS=647, BW=162MiB/s (170MB/s)(1631MiB/10080msec); 0 zone resets 00:13:21.830 slat (usec): min=17, max=59961, avg=1509.85, stdev=2727.92 00:13:21.831 clat (msec): min=15, max=189, avg=97.34, stdev=19.09 00:13:21.831 lat (msec): min=15, max=190, avg=98.85, stdev=19.22 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 61], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:21.831 | 30.00th=[ 90], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:21.831 | 70.00th=[ 92], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 130], 00:13:21.831 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 180], 00:13:21.831 | 99.99th=[ 190] 00:13:21.831 bw ( KiB/s): min=94019, 
max=182784, per=11.26%, avg=165392.15, stdev=26966.86, samples=20 00:13:21.831 iops : min= 367, max= 714, avg=646.05, stdev=105.38, samples=20 00:13:21.831 lat (msec) : 20=0.17%, 50=0.69%, 100=77.51%, 250=21.63% 00:13:21.831 cpu : usr=1.23%, sys=1.95%, ctx=7453, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,6524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job1: (groupid=0, jobs=1): err= 0: pid=78997: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=689, BW=172MiB/s (181MB/s)(1737MiB/10082msec); 0 zone resets 00:13:21.831 slat (usec): min=14, max=76442, avg=1433.57, stdev=2569.86 00:13:21.831 clat (msec): min=78, max=175, avg=91.39, stdev= 7.02 00:13:21.831 lat (msec): min=79, max=175, avg=92.82, stdev= 6.64 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 88], 00:13:21.831 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:21.831 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 95], 95.00th=[ 95], 00:13:21.831 | 99.00th=[ 121], 99.50th=[ 150], 99.90th=[ 171], 99.95th=[ 174], 00:13:21.831 | 99.99th=[ 176] 00:13:21.831 bw ( KiB/s): min=145699, max=182272, per=12.00%, avg=176296.15, stdev=7697.01, samples=20 00:13:21.831 iops : min= 569, max= 712, avg=688.65, stdev=30.10, samples=20 00:13:21.831 lat (msec) : 100=98.03%, 250=1.97% 00:13:21.831 cpu : usr=1.06%, sys=1.96%, ctx=11211, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,6949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job2: (groupid=0, jobs=1): err= 0: pid=79008: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=643, BW=161MiB/s (169MB/s)(1623MiB/10084msec); 0 zone resets 00:13:21.831 slat (usec): min=14, max=23243, avg=1535.41, stdev=2674.53 00:13:21.831 clat (msec): min=20, max=172, avg=97.81, stdev=18.25 00:13:21.831 lat (msec): min=20, max=172, avg=99.35, stdev=18.34 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:21.831 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:21.831 | 70.00th=[ 93], 80.00th=[ 118], 90.00th=[ 127], 95.00th=[ 129], 00:13:21.831 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 167], 00:13:21.831 | 99.99th=[ 174] 00:13:21.831 bw ( KiB/s): min=108544, max=182784, per=11.21%, avg=164582.40, stdev=25952.99, samples=20 00:13:21.831 iops : min= 424, max= 714, avg=642.90, stdev=101.38, samples=20 00:13:21.831 lat (msec) : 50=0.37%, 100=77.22%, 250=22.41% 00:13:21.831 cpu : usr=1.00%, sys=1.72%, ctx=6018, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,6492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:13:21.831 job3: (groupid=0, jobs=1): err= 0: pid=79010: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=647, BW=162MiB/s (170MB/s)(1632MiB/10082msec); 0 zone resets 00:13:21.831 slat (usec): min=17, max=15143, avg=1509.78, stdev=2637.82 00:13:21.831 clat (msec): min=17, max=172, avg=97.30, stdev=17.46 00:13:21.831 lat (msec): min=17, max=172, avg=98.81, stdev=17.53 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:21.831 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:13:21.831 | 70.00th=[ 93], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 129], 00:13:21.831 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 167], 00:13:21.831 | 99.99th=[ 174] 00:13:21.831 bw ( KiB/s): min=123392, max=184832, per=11.27%, avg=165516.60, stdev=23921.66, samples=20 00:13:21.831 iops : min= 482, max= 722, avg=646.50, stdev=93.53, samples=20 00:13:21.831 lat (msec) : 20=0.06%, 50=0.44%, 100=77.16%, 250=22.33% 00:13:21.831 cpu : usr=0.91%, sys=1.56%, ctx=8553, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,6528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job4: (groupid=0, jobs=1): err= 0: pid=79011: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=820, BW=205MiB/s (215MB/s)(2069MiB/10081msec); 0 zone resets 00:13:21.831 slat (usec): min=17, max=7971, avg=1202.86, stdev=2088.29 00:13:21.831 clat (msec): min=5, max=172, avg=76.74, stdev=17.48 00:13:21.831 lat (msec): min=5, max=172, avg=77.94, stdev=17.64 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:13:21.831 | 30.00th=[ 58], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 89], 00:13:21.831 | 70.00th=[ 91], 80.00th=[ 91], 90.00th=[ 92], 95.00th=[ 93], 00:13:21.831 | 99.00th=[ 96], 99.50th=[ 114], 99.90th=[ 161], 99.95th=[ 167], 00:13:21.831 | 99.99th=[ 174] 00:13:21.831 bw ( KiB/s): min=174592, max=291328, per=14.31%, avg=210201.60, stdev=47951.51, samples=20 00:13:21.831 iops : min= 682, max= 1138, avg=821.10, stdev=187.31, samples=20 00:13:21.831 lat (msec) : 10=0.02%, 20=0.15%, 50=0.44%, 100=98.79%, 250=0.60% 00:13:21.831 cpu : usr=1.36%, sys=2.32%, ctx=10199, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,8274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job5: (groupid=0, jobs=1): err= 0: pid=79012: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=332, BW=83.2MiB/s (87.3MB/s)(847MiB/10177msec); 0 zone resets 00:13:21.831 slat (usec): min=17, max=42747, avg=2906.14, stdev=5164.01 00:13:21.831 clat (msec): min=18, max=369, avg=189.25, stdev=27.16 00:13:21.831 lat (msec): min=18, max=369, avg=192.16, stdev=27.19 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 65], 5.00th=[ 153], 10.00th=[ 176], 20.00th=[ 184], 00:13:21.831 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 197], 00:13:21.831 | 70.00th=[ 
199], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 203], 00:13:21.831 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:13:21.831 | 99.99th=[ 372] 00:13:21.831 bw ( KiB/s): min=80384, max=106496, per=5.80%, avg=85120.00, stdev=6505.84, samples=20 00:13:21.831 iops : min= 314, max= 416, avg=332.50, stdev=25.41, samples=20 00:13:21.831 lat (msec) : 20=0.12%, 50=0.59%, 100=1.62%, 250=96.55%, 500=1.12% 00:13:21.831 cpu : usr=0.62%, sys=0.78%, ctx=3560, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,3388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job6: (groupid=0, jobs=1): err= 0: pid=79013: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=324, BW=81.2MiB/s (85.1MB/s)(826MiB/10178msec); 0 zone resets 00:13:21.831 slat (usec): min=20, max=39436, avg=3022.32, stdev=5328.45 00:13:21.831 clat (msec): min=17, max=372, avg=194.05, stdev=25.32 00:13:21.831 lat (msec): min=17, max=372, avg=197.07, stdev=25.13 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 81], 5.00th=[ 159], 10.00th=[ 182], 20.00th=[ 188], 00:13:21.831 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 199], 00:13:21.831 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 222], 00:13:21.831 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:13:21.831 | 99.99th=[ 372] 00:13:21.831 bw ( KiB/s): min=75776, max=105984, per=5.65%, avg=82969.60, stdev=5923.28, samples=20 00:13:21.831 iops : min= 296, max= 414, avg=324.10, stdev=23.14, samples=20 00:13:21.831 lat (msec) : 20=0.09%, 50=0.48%, 100=0.73%, 250=97.55%, 500=1.15% 00:13:21.831 cpu : usr=0.74%, sys=0.90%, ctx=1887, majf=0, minf=1 00:13:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.831 issued rwts: total=0,3304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.831 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.831 job7: (groupid=0, jobs=1): err= 0: pid=79014: Fri Dec 6 11:09:31 2024 00:13:21.831 write: IOPS=326, BW=81.6MiB/s (85.6MB/s)(830MiB/10169msec); 0 zone resets 00:13:21.831 slat (usec): min=16, max=61026, avg=3008.13, stdev=5299.95 00:13:21.831 clat (msec): min=63, max=363, avg=192.93, stdev=18.54 00:13:21.831 lat (msec): min=63, max=363, avg=195.94, stdev=18.03 00:13:21.831 clat percentiles (msec): 00:13:21.831 | 1.00th=[ 142], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 186], 00:13:21.831 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 199], 00:13:21.831 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 203], 00:13:21.832 | 99.00th=[ 264], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:13:21.832 | 99.99th=[ 363] 00:13:21.832 bw ( KiB/s): min=80384, max=86016, per=5.68%, avg=83379.20, stdev=1720.51, samples=20 00:13:21.832 iops : min= 314, max= 336, avg=325.70, stdev= 6.72, samples=20 00:13:21.832 lat (msec) : 100=0.48%, 250=98.37%, 500=1.14% 00:13:21.832 cpu : usr=0.57%, sys=0.93%, ctx=3982, majf=0, minf=1 00:13:21.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:21.832 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.832 issued rwts: total=0,3320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.832 job8: (groupid=0, jobs=1): err= 0: pid=79015: Fri Dec 6 11:09:31 2024 00:13:21.832 write: IOPS=692, BW=173MiB/s (181MB/s)(1745MiB/10081msec); 0 zone resets 00:13:21.832 slat (usec): min=18, max=27125, avg=1427.93, stdev=2432.96 00:13:21.832 clat (msec): min=34, max=174, avg=90.96, stdev= 6.25 00:13:21.832 lat (msec): min=34, max=174, avg=92.39, stdev= 5.87 00:13:21.832 clat percentiles (msec): 00:13:21.832 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:21.832 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:21.832 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 95], 95.00th=[ 96], 00:13:21.832 | 99.00th=[ 107], 99.50th=[ 122], 99.90th=[ 163], 99.95th=[ 169], 00:13:21.832 | 99.99th=[ 176] 00:13:21.832 bw ( KiB/s): min=161792, max=182272, per=12.06%, avg=177049.60, stdev=4362.21, samples=20 00:13:21.832 iops : min= 632, max= 712, avg=691.60, stdev=17.04, samples=20 00:13:21.832 lat (msec) : 50=0.21%, 100=97.98%, 250=1.81% 00:13:21.832 cpu : usr=1.10%, sys=1.53%, ctx=8541, majf=0, minf=1 00:13:21.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:21.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.832 issued rwts: total=0,6979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.832 job9: (groupid=0, jobs=1): err= 0: pid=79016: Fri Dec 6 11:09:31 2024 00:13:21.832 write: IOPS=326, BW=81.7MiB/s (85.7MB/s)(831MiB/10173msec); 0 zone resets 00:13:21.832 slat (usec): min=16, max=33971, avg=3004.34, stdev=5252.85 00:13:21.832 clat (msec): min=28, max=370, avg=192.78, stdev=22.67 00:13:21.832 lat (msec): min=28, max=370, avg=195.78, stdev=22.39 00:13:21.832 clat percentiles (msec): 00:13:21.832 | 1.00th=[ 96], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 186], 00:13:21.832 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 199], 00:13:21.832 | 70.00th=[ 201], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 207], 00:13:21.832 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:13:21.832 | 99.99th=[ 372] 00:13:21.832 bw ( KiB/s): min=79872, max=98304, per=5.68%, avg=83481.60, stdev=3900.96, samples=20 00:13:21.832 iops : min= 312, max= 384, avg=326.10, stdev=15.24, samples=20 00:13:21.832 lat (msec) : 50=0.36%, 100=0.72%, 250=97.77%, 500=1.14% 00:13:21.832 cpu : usr=0.51%, sys=0.80%, ctx=4727, majf=0, minf=1 00:13:21.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:21.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.832 issued rwts: total=0,3324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.832 job10: (groupid=0, jobs=1): err= 0: pid=79017: Fri Dec 6 11:09:31 2024 00:13:21.832 write: IOPS=324, BW=81.2MiB/s (85.1MB/s)(826MiB/10178msec); 0 zone resets 00:13:21.832 slat (usec): min=19, max=43373, avg=3022.36, stdev=5316.62 00:13:21.832 clat (msec): min=19, max=368, avg=193.99, stdev=23.84 00:13:21.832 lat (msec): min=19, max=368, 
avg=197.01, stdev=23.61 00:13:21.832 clat percentiles (msec): 00:13:21.832 | 1.00th=[ 83], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 188], 00:13:21.832 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 199], 00:13:21.832 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 211], 00:13:21.832 | 99.00th=[ 268], 99.50th=[ 321], 99.90th=[ 355], 99.95th=[ 368], 00:13:21.832 | 99.99th=[ 368] 00:13:21.832 bw ( KiB/s): min=77824, max=98501, per=5.65%, avg=82979.45, stdev=4102.96, samples=20 00:13:21.832 iops : min= 304, max= 384, avg=324.10, stdev=15.87, samples=20 00:13:21.832 lat (msec) : 20=0.12%, 50=0.48%, 100=0.61%, 250=97.64%, 500=1.15% 00:13:21.832 cpu : usr=0.54%, sys=1.17%, ctx=4433, majf=0, minf=1 00:13:21.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:21.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:21.832 issued rwts: total=0,3305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.832 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.832 00:13:21.832 Run status group 0 (all jobs): 00:13:21.832 WRITE: bw=1434MiB/s (1504MB/s), 81.2MiB/s-205MiB/s (85.1MB/s-215MB/s), io=14.3GiB (15.3GB), run=10080-10178msec 00:13:21.832 00:13:21.832 Disk stats (read/write): 00:13:21.832 nvme0n1: ios=49/12878, merge=0/0, ticks=71/1213017, in_queue=1213088, util=97.83% 00:13:21.832 nvme10n1: ios=49/13730, merge=0/0, ticks=45/1213669, in_queue=1213714, util=97.82% 00:13:21.832 nvme1n1: ios=46/12817, merge=0/0, ticks=36/1213303, in_queue=1213339, util=98.03% 00:13:21.832 nvme2n1: ios=24/12891, merge=0/0, ticks=39/1214719, in_queue=1214758, util=97.97% 00:13:21.832 nvme3n1: ios=23/16374, merge=0/0, ticks=30/1211641, in_queue=1211671, util=97.91% 00:13:21.832 nvme4n1: ios=5/6633, merge=0/0, ticks=30/1208863, in_queue=1208893, util=98.26% 00:13:21.832 nvme5n1: ios=0/6468, merge=0/0, ticks=0/1209041, in_queue=1209041, util=98.42% 00:13:21.832 nvme6n1: ios=0/6489, merge=0/0, ticks=0/1206734, in_queue=1206734, util=98.33% 00:13:21.832 nvme7n1: ios=0/13794, merge=0/0, ticks=0/1213278, in_queue=1213278, util=98.67% 00:13:21.832 nvme8n1: ios=0/6504, merge=0/0, ticks=0/1207244, in_queue=1207244, util=98.82% 00:13:21.832 nvme9n1: ios=0/6464, merge=0/0, ticks=0/1207654, in_queue=1207654, util=98.94% 00:13:21.832 11:09:31 -- target/multiconnection.sh@36 -- # sync 00:13:21.832 11:09:31 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:21.832 11:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.832 11:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.832 11:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:21.832 11:09:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.832 11:09:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.832 11:09:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:21.832 11:09:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.832 11:09:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:21.832 11:09:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.832 11:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.832 11:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.832 
11:09:31 -- common/autotest_common.sh@10 -- # set +x 00:13:21.832 11:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.832 11:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.832 11:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:21.832 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:21.832 11:09:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:21.832 11:09:31 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.832 11:09:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.832 11:09:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:21.832 11:09:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.832 11:09:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:21.832 11:09:31 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.832 11:09:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:21.832 11:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.832 11:09:31 -- common/autotest_common.sh@10 -- # set +x 00:13:21.832 11:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.832 11:09:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.832 11:09:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:21.832 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:21.832 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:21.832 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.832 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.832 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:21.832 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.832 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:21.832 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.832 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:21.832 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.832 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.832 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.832 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.832 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:21.832 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:21.832 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:21.832 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.832 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.832 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:21.832 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.832 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:21.832 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.832 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:21.832 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.832 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.832 11:09:32 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.832 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.832 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:21.832 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:21.833 11:09:32 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:21.833 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:21.833 11:09:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:21.833 11:09:32 -- common/autotest_common.sh@1208 -- # local i=0 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:21.833 11:09:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:21.833 11:09:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:21.833 11:09:32 -- common/autotest_common.sh@1220 -- # return 0 00:13:21.833 11:09:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:21.833 11:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.833 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.833 11:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.833 11:09:32 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:21.833 11:09:32 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:21.833 11:09:32 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:21.833 11:09:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:21.833 11:09:32 -- nvmf/common.sh@116 -- # sync 00:13:21.833 11:09:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:21.833 11:09:32 -- nvmf/common.sh@119 -- # set +e 00:13:21.833 11:09:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:21.833 11:09:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:21.833 rmmod nvme_tcp 00:13:21.833 rmmod nvme_fabrics 00:13:21.833 rmmod nvme_keyring 00:13:21.833 11:09:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:21.833 11:09:32 -- nvmf/common.sh@123 -- # set -e 00:13:21.833 11:09:32 -- nvmf/common.sh@124 -- # return 0 00:13:21.833 11:09:32 -- nvmf/common.sh@477 -- # '[' -n 78335 ']' 00:13:21.833 11:09:32 -- nvmf/common.sh@478 -- # killprocess 78335 00:13:21.833 11:09:32 -- common/autotest_common.sh@936 -- # '[' -z 78335 ']' 00:13:21.833 11:09:32 -- common/autotest_common.sh@940 -- # kill -0 78335 00:13:21.833 11:09:32 -- common/autotest_common.sh@941 -- # uname 00:13:21.833 11:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.833 11:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78335 00:13:21.833 killing process with pid 78335 00:13:21.833 11:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.833 11:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.833 11:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78335' 00:13:21.833 11:09:32 -- common/autotest_common.sh@955 -- # kill 78335 00:13:21.833 11:09:32 -- common/autotest_common.sh@960 -- # wait 78335 00:13:22.093 11:09:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:22.093 11:09:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:22.093 11:09:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:22.093 11:09:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.093 11:09:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:22.093 11:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.093 11:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.093 11:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
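(The xtrace above walks the multiconnection teardown: each of the eleven test subsystems is disconnected on the host and then deleted from the target. Consolidated into script form, the loop is roughly the sketch below; rpc_cmd and waitforserial_disconnect are helpers from the shared autotest scripts, and NVMF_SUBSYS is assumed to be 11 here, matching cnode1 through cnode11 in the trace.)

    # Rough sketch of the teardown loop traced above (multiconnection.sh @37-40).
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the host-side NVMe/TCP connection to subsystem i.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Block until the namespace with serial SPDK<i> disappears from lsblk.
        waitforserial_disconnect "SPDK${i}"
        # Remove the subsystem from the SPDK target over JSON-RPC.
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done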
00:13:22.093 11:09:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:22.093 00:13:22.093 real 0m48.264s 00:13:22.093 user 2m35.175s 00:13:22.093 sys 0m36.600s 00:13:22.093 11:09:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.093 ************************************ 00:13:22.093 END TEST nvmf_multiconnection 00:13:22.093 ************************************ 00:13:22.093 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.093 11:09:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:22.093 11:09:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.093 11:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.093 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.093 ************************************ 00:13:22.093 START TEST nvmf_initiator_timeout 00:13:22.093 ************************************ 00:13:22.093 11:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:22.353 * Looking for test storage... 00:13:22.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:22.353 11:09:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:22.353 11:09:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:22.353 11:09:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:22.353 11:09:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:22.353 11:09:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:22.353 11:09:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:22.353 11:09:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:22.353 11:09:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:22.353 11:09:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:22.353 11:09:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.353 11:09:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:22.353 11:09:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:22.353 11:09:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:22.353 11:09:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:22.353 11:09:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:22.353 11:09:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:22.353 11:09:33 -- scripts/common.sh@344 -- # : 1 00:13:22.353 11:09:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:22.353 11:09:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.353 11:09:33 -- scripts/common.sh@364 -- # decimal 1 00:13:22.353 11:09:33 -- scripts/common.sh@352 -- # local d=1 00:13:22.353 11:09:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.353 11:09:33 -- scripts/common.sh@354 -- # echo 1 00:13:22.353 11:09:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:22.353 11:09:33 -- scripts/common.sh@365 -- # decimal 2 00:13:22.353 11:09:33 -- scripts/common.sh@352 -- # local d=2 00:13:22.353 11:09:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.353 11:09:33 -- scripts/common.sh@354 -- # echo 2 00:13:22.353 11:09:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:22.353 11:09:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:22.353 11:09:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:22.353 11:09:33 -- scripts/common.sh@367 -- # return 0 00:13:22.353 11:09:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.353 11:09:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.353 --rc genhtml_branch_coverage=1 00:13:22.353 --rc genhtml_function_coverage=1 00:13:22.353 --rc genhtml_legend=1 00:13:22.353 --rc geninfo_all_blocks=1 00:13:22.353 --rc geninfo_unexecuted_blocks=1 00:13:22.353 00:13:22.353 ' 00:13:22.353 11:09:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.353 --rc genhtml_branch_coverage=1 00:13:22.353 --rc genhtml_function_coverage=1 00:13:22.353 --rc genhtml_legend=1 00:13:22.353 --rc geninfo_all_blocks=1 00:13:22.353 --rc geninfo_unexecuted_blocks=1 00:13:22.353 00:13:22.353 ' 00:13:22.353 11:09:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.353 --rc genhtml_branch_coverage=1 00:13:22.353 --rc genhtml_function_coverage=1 00:13:22.353 --rc genhtml_legend=1 00:13:22.353 --rc geninfo_all_blocks=1 00:13:22.353 --rc geninfo_unexecuted_blocks=1 00:13:22.353 00:13:22.353 ' 00:13:22.353 11:09:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.353 --rc genhtml_branch_coverage=1 00:13:22.353 --rc genhtml_function_coverage=1 00:13:22.353 --rc genhtml_legend=1 00:13:22.353 --rc geninfo_all_blocks=1 00:13:22.353 --rc geninfo_unexecuted_blocks=1 00:13:22.353 00:13:22.353 ' 00:13:22.353 11:09:33 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:22.353 11:09:33 -- nvmf/common.sh@7 -- # uname -s 00:13:22.353 11:09:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.353 11:09:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.353 11:09:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.353 11:09:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.353 11:09:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.353 11:09:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.353 11:09:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.353 11:09:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.353 11:09:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.353 11:09:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.353 11:09:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 
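(nvme gen-hostnqn above yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>; the host ID set on the next trace line reuses the UUID portion, and both values feed the nvme connect call later in this test. A minimal sketch, assuming the host ID is derived by stripping everything up to the last colon of the generated NQN:)

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:6bf11412-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID is the trailing UUID of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Used later in the test:
    # nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem NQN> -a 10.0.0.2 -s 4420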
00:13:22.353 11:09:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:13:22.353 11:09:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.353 11:09:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.353 11:09:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:22.353 11:09:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:22.353 11:09:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.353 11:09:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.353 11:09:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.353 11:09:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.353 11:09:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.353 11:09:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.353 11:09:33 -- paths/export.sh@5 -- # export PATH 00:13:22.353 11:09:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.353 11:09:33 -- nvmf/common.sh@46 -- # : 0 00:13:22.353 11:09:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:22.353 11:09:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:22.353 11:09:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:22.353 11:09:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.353 11:09:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.353 11:09:33 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:22.353 11:09:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:22.353 11:09:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:22.354 11:09:33 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.354 11:09:33 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.354 11:09:33 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:22.354 11:09:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:22.354 11:09:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.354 11:09:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:22.354 11:09:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:22.354 11:09:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:22.354 11:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.354 11:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.354 11:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.354 11:09:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:22.354 11:09:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:22.354 11:09:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:22.354 11:09:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:22.354 11:09:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:22.354 11:09:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:22.354 11:09:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.354 11:09:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.354 11:09:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:22.354 11:09:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:22.354 11:09:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:22.354 11:09:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:22.354 11:09:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:22.354 11:09:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.354 11:09:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:22.354 11:09:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:22.354 11:09:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:22.354 11:09:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:22.354 11:09:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:22.354 11:09:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:22.354 Cannot find device "nvmf_tgt_br" 00:13:22.354 11:09:33 -- nvmf/common.sh@154 -- # true 00:13:22.354 11:09:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:22.354 Cannot find device "nvmf_tgt_br2" 00:13:22.354 11:09:33 -- nvmf/common.sh@155 -- # true 00:13:22.354 11:09:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:22.354 11:09:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:22.354 Cannot find device "nvmf_tgt_br" 00:13:22.354 11:09:33 -- nvmf/common.sh@157 -- # true 00:13:22.354 11:09:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:22.614 Cannot find device "nvmf_tgt_br2" 00:13:22.614 11:09:33 -- nvmf/common.sh@158 -- # true 00:13:22.614 11:09:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:22.614 11:09:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:22.614 11:09:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:22.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:22.614 11:09:33 -- nvmf/common.sh@161 -- # true 00:13:22.614 11:09:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:22.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:22.614 11:09:33 -- nvmf/common.sh@162 -- # true 00:13:22.614 11:09:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:22.614 11:09:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:22.614 11:09:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:22.614 11:09:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:22.614 11:09:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:22.614 11:09:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:22.614 11:09:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:22.614 11:09:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:22.614 11:09:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:22.614 11:09:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:22.614 11:09:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:22.614 11:09:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:22.614 11:09:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:22.614 11:09:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:22.614 11:09:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:22.614 11:09:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:22.614 11:09:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:22.614 11:09:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:22.614 11:09:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:22.614 11:09:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:22.614 11:09:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:22.614 11:09:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:22.614 11:09:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:22.614 11:09:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:22.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:22.614 00:13:22.614 --- 10.0.0.2 ping statistics --- 00:13:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.614 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:22.614 11:09:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:22.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:22.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:22.614 00:13:22.614 --- 10.0.0.3 ping statistics --- 00:13:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.614 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:22.614 11:09:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:22.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:22.614 00:13:22.614 --- 10.0.0.1 ping statistics --- 00:13:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.614 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:22.614 11:09:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.614 11:09:33 -- nvmf/common.sh@421 -- # return 0 00:13:22.614 11:09:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:22.614 11:09:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.614 11:09:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:22.614 11:09:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:22.614 11:09:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.614 11:09:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:22.614 11:09:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:22.874 11:09:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:22.875 11:09:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:22.875 11:09:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.875 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.875 11:09:33 -- nvmf/common.sh@469 -- # nvmfpid=79390 00:13:22.875 11:09:33 -- nvmf/common.sh@470 -- # waitforlisten 79390 00:13:22.875 11:09:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.875 11:09:33 -- common/autotest_common.sh@829 -- # '[' -z 79390 ']' 00:13:22.875 11:09:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.875 11:09:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.875 11:09:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.875 11:09:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.875 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.875 [2024-12-06 11:09:33.836031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.875 [2024-12-06 11:09:33.836152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.875 [2024-12-06 11:09:33.980911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.875 [2024-12-06 11:09:34.016024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.875 [2024-12-06 11:09:34.016424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.875 [2024-12-06 11:09:34.016606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.875 [2024-12-06 11:09:34.016720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
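(The nvmf_veth_init trace further up built the test topology before the target was started: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces, three veth pairs, a bridge nvmf_br joining the host-side peers, an iptables rule for port 4420, and the ping checks against 10.0.0.2, 10.0.0.3 and 10.0.0.1. A condensed sketch of the same setup, with the per-interface "ip link set ... up" steps elided:)

    ip netns add nvmf_tgt_ns_spdk
    # Veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target-side interfaces into the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bridge the host-side peers so initiator and target share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Let NVMe/TCP traffic (port 4420) reach the initiator interface; allow bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Reachability check from the root namespace to the first target address.
    ping -c 1 10.0.0.2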
00:13:22.875 [2024-12-06 11:09:34.016816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.875 [2024-12-06 11:09:34.016978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.875 [2024-12-06 11:09:34.017480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.875 [2024-12-06 11:09:34.017516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.810 11:09:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.810 11:09:34 -- common/autotest_common.sh@862 -- # return 0 00:13:23.810 11:09:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:23.810 11:09:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 11:09:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 Malloc0 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 Delay0 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 [2024-12-06 11:09:34.894755] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.810 11:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 11:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 [2024-12-06 11:09:34.922938] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.810 11:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 11:09:34 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.070 11:09:35 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.070 11:09:35 -- common/autotest_common.sh@1187 -- # local i=0 00:13:24.070 11:09:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.070 11:09:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:24.070 11:09:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:25.971 11:09:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:25.971 11:09:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:25.971 11:09:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.971 11:09:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:25.971 11:09:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.971 11:09:37 -- common/autotest_common.sh@1197 -- # return 0 00:13:25.971 11:09:37 -- target/initiator_timeout.sh@35 -- # fio_pid=79454 00:13:25.971 11:09:37 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:25.971 11:09:37 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:25.971 [global] 00:13:25.971 thread=1 00:13:25.971 invalidate=1 00:13:25.971 rw=write 00:13:25.971 time_based=1 00:13:25.971 runtime=60 00:13:25.971 ioengine=libaio 00:13:25.971 direct=1 00:13:25.971 bs=4096 00:13:25.971 iodepth=1 00:13:25.971 norandommap=0 00:13:25.971 numjobs=1 00:13:25.971 00:13:25.971 verify_dump=1 00:13:25.971 verify_backlog=512 00:13:25.971 verify_state_save=0 00:13:25.971 do_verify=1 00:13:25.971 verify=crc32c-intel 00:13:25.971 [job0] 00:13:25.971 filename=/dev/nvme0n1 00:13:26.229 Could not set queue depth (nvme0n1) 00:13:26.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.229 fio-3.35 00:13:26.229 Starting 1 thread 00:13:29.516 11:09:40 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:29.516 11:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.516 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:29.516 true 00:13:29.516 11:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.516 11:09:40 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:29.516 11:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.516 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:29.516 true 00:13:29.516 11:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.516 11:09:40 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:29.516 11:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.516 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:29.516 true 00:13:29.517 11:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.517 11:09:40 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:29.517 11:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.517 11:09:40 -- common/autotest_common.sh@10 -- # set +x 00:13:29.517 true 00:13:29.517 11:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.517 11:09:40 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:32.045 11:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.045 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:32.045 true 00:13:32.045 11:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:32.045 11:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.045 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:32.045 true 00:13:32.045 11:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:32.045 11:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.045 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:32.045 true 00:13:32.045 11:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:32.045 11:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.045 11:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:32.045 true 00:13:32.045 11:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:32.045 11:09:43 -- target/initiator_timeout.sh@54 -- # wait 79454 00:14:28.285 00:14:28.285 job0: (groupid=0, jobs=1): err= 0: pid=79475: Fri Dec 6 11:10:37 2024 00:14:28.285 read: IOPS=795, BW=3183KiB/s (3259kB/s)(187MiB/60001msec) 00:14:28.285 slat (usec): min=10, max=7823, avg=14.07, stdev=45.22 00:14:28.285 clat (usec): min=152, max=40772k, avg=1058.25, stdev=186592.48 00:14:28.285 lat (usec): min=164, max=40772k, avg=1072.32, stdev=186592.48 00:14:28.285 clat percentiles (usec): 00:14:28.285 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:14:28.285 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:14:28.285 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:14:28.285 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 322], 00:14:28.285 | 99.99th=[ 701] 00:14:28.285 write: IOPS=802, BW=3208KiB/s (3285kB/s)(188MiB/60001msec); 0 zone resets 00:14:28.285 slat (usec): min=13, max=436, avg=20.81, stdev= 5.82 00:14:28.285 clat (usec): min=105, max=711, avg=158.78, stdev=20.79 00:14:28.285 lat (usec): min=135, max=761, avg=179.59, stdev=21.76 00:14:28.285 clat percentiles (usec): 00:14:28.286 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 143], 00:14:28.286 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:14:28.286 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:14:28.286 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 243], 99.95th=[ 253], 00:14:28.286 | 99.99th=[ 537] 00:14:28.286 bw ( KiB/s): min= 4096, max=12208, per=100.00%, avg=9916.16, stdev=1497.34, samples=38 00:14:28.286 iops : min= 1024, max= 3052, avg=2479.03, stdev=374.34, samples=38 00:14:28.286 lat (usec) : 250=98.36%, 500=1.62%, 750=0.01%, 1000=0.01% 00:14:28.286 lat (msec) : >=2000=0.01% 00:14:28.286 cpu : usr=0.59%, sys=2.13%, ctx=95884, majf=0, minf=5 00:14:28.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.286 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.286 issued rwts: total=47745,48128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.286 00:14:28.286 Run status group 0 (all jobs): 00:14:28.286 READ: bw=3183KiB/s (3259kB/s), 3183KiB/s-3183KiB/s (3259kB/s-3259kB/s), io=187MiB (196MB), run=60001-60001msec 00:14:28.286 WRITE: bw=3208KiB/s (3285kB/s), 3208KiB/s-3208KiB/s (3285kB/s-3285kB/s), io=188MiB (197MB), run=60001-60001msec 00:14:28.286 00:14:28.286 Disk stats (read/write): 00:14:28.286 nvme0n1: ios=47842/47749, merge=0/0, ticks=10156/8162, in_queue=18318, util=99.73% 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.286 11:10:37 -- common/autotest_common.sh@1208 -- # local i=0 00:14:28.286 11:10:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:28.286 11:10:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.286 11:10:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:28.286 11:10:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.286 nvmf hotplug test: fio successful as expected 00:14:28.286 11:10:37 -- common/autotest_common.sh@1220 -- # return 0 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.286 11:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.286 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.286 11:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:28.286 11:10:37 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:28.286 11:10:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.286 11:10:37 -- nvmf/common.sh@116 -- # sync 00:14:28.286 11:10:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:28.286 11:10:37 -- nvmf/common.sh@119 -- # set +e 00:14:28.286 11:10:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.286 11:10:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:28.286 rmmod nvme_tcp 00:14:28.286 rmmod nvme_fabrics 00:14:28.286 rmmod nvme_keyring 00:14:28.286 11:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.286 11:10:37 -- nvmf/common.sh@123 -- # set -e 00:14:28.286 11:10:37 -- nvmf/common.sh@124 -- # return 0 00:14:28.286 11:10:37 -- nvmf/common.sh@477 -- # '[' -n 79390 ']' 00:14:28.286 11:10:37 -- nvmf/common.sh@478 -- # killprocess 79390 00:14:28.286 11:10:37 -- common/autotest_common.sh@936 -- # '[' -z 79390 ']' 00:14:28.286 11:10:37 -- common/autotest_common.sh@940 -- # kill -0 79390 00:14:28.286 11:10:37 -- common/autotest_common.sh@941 -- # uname 00:14:28.286 11:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.286 11:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79390 00:14:28.286 killing process with pid 79390 00:14:28.286 11:10:37 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:28.286 11:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:28.286 11:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79390' 00:14:28.286 11:10:37 -- common/autotest_common.sh@955 -- # kill 79390 00:14:28.286 11:10:37 -- common/autotest_common.sh@960 -- # wait 79390 00:14:28.286 11:10:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.286 11:10:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:28.286 11:10:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:28.286 11:10:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.286 11:10:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:28.286 11:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.286 11:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.286 11:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.286 11:10:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:28.286 00:14:28.286 real 1m4.529s 00:14:28.286 user 3m52.769s 00:14:28.286 sys 0m22.140s 00:14:28.286 11:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.286 ************************************ 00:14:28.286 END TEST nvmf_initiator_timeout 00:14:28.286 ************************************ 00:14:28.286 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.286 11:10:37 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:28.286 11:10:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:28.286 11:10:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.286 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.286 11:10:37 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:28.286 11:10:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.286 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.286 11:10:37 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:28.286 11:10:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:28.286 11:10:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.286 11:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.286 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.286 ************************************ 00:14:28.286 START TEST nvmf_identify 00:14:28.286 ************************************ 00:14:28.286 11:10:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:28.286 * Looking for test storage... 
00:14:28.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:28.286 11:10:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.286 11:10:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.286 11:10:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.286 11:10:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.286 11:10:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.286 11:10:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.286 11:10:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.286 11:10:38 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.286 11:10:38 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.286 11:10:38 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.286 11:10:38 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.286 11:10:38 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.286 11:10:38 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.286 11:10:38 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.286 11:10:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.286 11:10:38 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.286 11:10:38 -- scripts/common.sh@344 -- # : 1 00:14:28.286 11:10:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.286 11:10:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.286 11:10:38 -- scripts/common.sh@364 -- # decimal 1 00:14:28.286 11:10:38 -- scripts/common.sh@352 -- # local d=1 00:14:28.286 11:10:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.286 11:10:38 -- scripts/common.sh@354 -- # echo 1 00:14:28.286 11:10:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.286 11:10:38 -- scripts/common.sh@365 -- # decimal 2 00:14:28.286 11:10:38 -- scripts/common.sh@352 -- # local d=2 00:14:28.286 11:10:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.286 11:10:38 -- scripts/common.sh@354 -- # echo 2 00:14:28.286 11:10:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.286 11:10:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.286 11:10:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.286 11:10:38 -- scripts/common.sh@367 -- # return 0 00:14:28.286 11:10:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.286 11:10:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.286 --rc genhtml_branch_coverage=1 00:14:28.286 --rc genhtml_function_coverage=1 00:14:28.286 --rc genhtml_legend=1 00:14:28.286 --rc geninfo_all_blocks=1 00:14:28.286 --rc geninfo_unexecuted_blocks=1 00:14:28.286 00:14:28.286 ' 00:14:28.286 11:10:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.286 --rc genhtml_branch_coverage=1 00:14:28.286 --rc genhtml_function_coverage=1 00:14:28.286 --rc genhtml_legend=1 00:14:28.286 --rc geninfo_all_blocks=1 00:14:28.286 --rc geninfo_unexecuted_blocks=1 00:14:28.286 00:14:28.286 ' 00:14:28.286 11:10:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.286 --rc genhtml_branch_coverage=1 00:14:28.286 --rc genhtml_function_coverage=1 00:14:28.286 --rc genhtml_legend=1 00:14:28.286 --rc geninfo_all_blocks=1 00:14:28.286 --rc geninfo_unexecuted_blocks=1 00:14:28.286 00:14:28.286 ' 00:14:28.286 
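The xtrace above steps through SPDK's shell version-comparison helpers (lt calling cmp_versions) to pick lcov options: each version string is split on dots and the numeric fields are compared left to right. A minimal standalone sketch of that idea, with a hypothetical function name rather than the repository helper:

#!/usr/bin/env bash
# Return success (0) if version $1 is strictly older than version $2,
# comparing dot-separated numeric fields left to right.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2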
11:10:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.286 --rc genhtml_branch_coverage=1 00:14:28.286 --rc genhtml_function_coverage=1 00:14:28.286 --rc genhtml_legend=1 00:14:28.286 --rc geninfo_all_blocks=1 00:14:28.286 --rc geninfo_unexecuted_blocks=1 00:14:28.286 00:14:28.286 ' 00:14:28.287 11:10:38 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.287 11:10:38 -- nvmf/common.sh@7 -- # uname -s 00:14:28.287 11:10:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.287 11:10:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.287 11:10:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.287 11:10:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.287 11:10:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.287 11:10:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.287 11:10:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.287 11:10:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.287 11:10:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.287 11:10:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:14:28.287 11:10:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:14:28.287 11:10:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.287 11:10:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.287 11:10:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.287 11:10:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.287 11:10:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.287 11:10:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.287 11:10:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.287 11:10:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.287 11:10:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.287 11:10:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.287 11:10:38 -- paths/export.sh@5 -- # export PATH 00:14:28.287 11:10:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.287 11:10:38 -- nvmf/common.sh@46 -- # : 0 00:14:28.287 11:10:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.287 11:10:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.287 11:10:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.287 11:10:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.287 11:10:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.287 11:10:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.287 11:10:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.287 11:10:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.287 11:10:38 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.287 11:10:38 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.287 11:10:38 -- host/identify.sh@14 -- # nvmftestinit 00:14:28.287 11:10:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.287 11:10:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.287 11:10:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.287 11:10:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.287 11:10:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.287 11:10:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.287 11:10:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.287 11:10:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.287 11:10:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:28.287 11:10:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:28.287 11:10:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.287 11:10:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.287 11:10:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:28.287 11:10:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:28.287 11:10:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.287 11:10:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.287 11:10:38 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.287 11:10:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.287 11:10:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.287 11:10:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.287 11:10:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.287 11:10:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.287 11:10:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:28.287 11:10:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:28.287 Cannot find device "nvmf_tgt_br" 00:14:28.287 11:10:38 -- nvmf/common.sh@154 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.287 Cannot find device "nvmf_tgt_br2" 00:14:28.287 11:10:38 -- nvmf/common.sh@155 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:28.287 11:10:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:28.287 Cannot find device "nvmf_tgt_br" 00:14:28.287 11:10:38 -- nvmf/common.sh@157 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:28.287 Cannot find device "nvmf_tgt_br2" 00:14:28.287 11:10:38 -- nvmf/common.sh@158 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:28.287 11:10:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:28.287 11:10:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.287 11:10:38 -- nvmf/common.sh@161 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.287 11:10:38 -- nvmf/common.sh@162 -- # true 00:14:28.287 11:10:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.287 11:10:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.287 11:10:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.287 11:10:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.287 11:10:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.287 11:10:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.287 11:10:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.287 11:10:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.287 11:10:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.287 11:10:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:28.287 11:10:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:28.287 11:10:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:28.287 11:10:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:28.287 11:10:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.287 11:10:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.287 11:10:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:28.287 11:10:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:28.287 11:10:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:28.287 11:10:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.287 11:10:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.287 11:10:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.287 11:10:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.287 11:10:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.287 11:10:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:28.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:28.287 00:14:28.287 --- 10.0.0.2 ping statistics --- 00:14:28.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.287 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:28.287 11:10:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:28.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:28.287 00:14:28.287 --- 10.0.0.3 ping statistics --- 00:14:28.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.287 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:28.287 11:10:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:28.287 00:14:28.287 --- 10.0.0.1 ping statistics --- 00:14:28.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.288 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:28.288 11:10:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.288 11:10:38 -- nvmf/common.sh@421 -- # return 0 00:14:28.288 11:10:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:28.288 11:10:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.288 11:10:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:28.288 11:10:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:28.288 11:10:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.288 11:10:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:28.288 11:10:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:28.288 11:10:38 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:28.288 11:10:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.288 11:10:38 -- common/autotest_common.sh@10 -- # set +x 00:14:28.288 11:10:38 -- host/identify.sh@19 -- # nvmfpid=80328 00:14:28.288 11:10:38 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.288 11:10:38 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:28.288 11:10:38 -- host/identify.sh@23 -- # waitforlisten 80328 00:14:28.288 11:10:38 -- common/autotest_common.sh@829 -- # '[' -z 80328 ']' 00:14:28.288 11:10:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.288 11:10:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
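The nvmf_veth_init trace above builds the test topology before nvmf_tgt is launched: a network namespace for the SPDK target, veth pairs whose host-side ends are enslaved to a bridge, 10.0.0.x/24 addresses, an iptables rule admitting TCP/4420, and connectivity pings. A condensed sketch of that sequence, using the interface names and addresses the log reports (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is omitted for brevity):

# Target-side network namespace and a veth pair per endpoint
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: initiator on the host, target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the host-side ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic and verify reachability in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1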
00:14:28.288 11:10:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.288 11:10:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.288 11:10:38 -- common/autotest_common.sh@10 -- # set +x 00:14:28.288 [2024-12-06 11:10:38.453194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:28.288 [2024-12-06 11:10:38.453279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.288 [2024-12-06 11:10:38.594214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.288 [2024-12-06 11:10:38.632638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.288 [2024-12-06 11:10:38.632780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.288 [2024-12-06 11:10:38.632794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.288 [2024-12-06 11:10:38.632803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.288 [2024-12-06 11:10:38.633001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.288 [2024-12-06 11:10:38.633110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.288 [2024-12-06 11:10:38.634152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.288 [2024-12-06 11:10:38.634216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.288 11:10:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.288 11:10:39 -- common/autotest_common.sh@862 -- # return 0 00:14:28.288 11:10:39 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.288 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.288 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.288 [2024-12-06 11:10:39.423601] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:28.548 11:10:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 11:10:39 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 Malloc0 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 
11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 [2024-12-06 11:10:39.525178] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:28.548 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.548 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:28.548 [2024-12-06 11:10:39.540928] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:28.548 [ 00:14:28.548 { 00:14:28.548 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.548 "subtype": "Discovery", 00:14:28.548 "listen_addresses": [ 00:14:28.548 { 00:14:28.548 "transport": "TCP", 00:14:28.548 "trtype": "TCP", 00:14:28.548 "adrfam": "IPv4", 00:14:28.548 "traddr": "10.0.0.2", 00:14:28.548 "trsvcid": "4420" 00:14:28.548 } 00:14:28.548 ], 00:14:28.548 "allow_any_host": true, 00:14:28.548 "hosts": [] 00:14:28.548 }, 00:14:28.548 { 00:14:28.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.548 "subtype": "NVMe", 00:14:28.548 "listen_addresses": [ 00:14:28.548 { 00:14:28.548 "transport": "TCP", 00:14:28.548 "trtype": "TCP", 00:14:28.548 "adrfam": "IPv4", 00:14:28.548 "traddr": "10.0.0.2", 00:14:28.548 "trsvcid": "4420" 00:14:28.548 } 00:14:28.548 ], 00:14:28.548 "allow_any_host": true, 00:14:28.548 "hosts": [], 00:14:28.548 "serial_number": "SPDK00000000000001", 00:14:28.548 "model_number": "SPDK bdev Controller", 00:14:28.548 "max_namespaces": 32, 00:14:28.548 "min_cntlid": 1, 00:14:28.548 "max_cntlid": 65519, 00:14:28.548 "namespaces": [ 00:14:28.548 { 00:14:28.548 "nsid": 1, 00:14:28.548 "bdev_name": "Malloc0", 00:14:28.548 "name": "Malloc0", 00:14:28.548 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:28.548 "eui64": "ABCDEF0123456789", 00:14:28.548 "uuid": "2ded56a9-a535-4bfd-a56b-1c866c8409db" 00:14:28.548 } 00:14:28.548 ] 00:14:28.548 } 00:14:28.548 ] 00:14:28.548 11:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.548 11:10:39 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:28.548 [2024-12-06 11:10:39.578140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
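The rpc_cmd calls in the trace above are issued through SPDK's JSON-RPC client; the identify test's target configuration amounts to creating the TCP transport, a malloc bdev, an NVMe subsystem with that bdev as namespace 1, and TCP listeners for the subsystem and the discovery service. A rough equivalent using scripts/rpc.py directly, with the values shown in the log (the harness wrapper handles socket readiness and error checking, which this sketch does not):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport and backing device
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem, namespace, and listeners
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Inspect the resulting configuration (matches the JSON dump in the log)
$RPC nvmf_get_subsystems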
00:14:28.548 [2024-12-06 11:10:39.578195] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80363 ] 00:14:28.814 [2024-12-06 11:10:39.716185] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:28.814 [2024-12-06 11:10:39.716264] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:28.814 [2024-12-06 11:10:39.716272] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:28.814 [2024-12-06 11:10:39.716284] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:28.814 [2024-12-06 11:10:39.716297] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:28.814 [2024-12-06 11:10:39.716474] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:28.814 [2024-12-06 11:10:39.716607] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2150510 0 00:14:28.814 [2024-12-06 11:10:39.723629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:28.814 [2024-12-06 11:10:39.723665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:28.814 [2024-12-06 11:10:39.723688] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:28.814 [2024-12-06 11:10:39.723692] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:28.814 [2024-12-06 11:10:39.723736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.723744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.723748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.814 [2024-12-06 11:10:39.723762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:28.814 [2024-12-06 11:10:39.723794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.814 [2024-12-06 11:10:39.731615] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.814 [2024-12-06 11:10:39.731636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.814 [2024-12-06 11:10:39.731657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.731663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.814 [2024-12-06 11:10:39.731675] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:28.814 [2024-12-06 11:10:39.731683] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:28.814 [2024-12-06 11:10:39.731689] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:28.814 [2024-12-06 11:10:39.731718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.731727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.814 [2024-12-06 
11:10:39.731731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.814 [2024-12-06 11:10:39.731740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.814 [2024-12-06 11:10:39.731769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.814 [2024-12-06 11:10:39.731837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.814 [2024-12-06 11:10:39.731859] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.814 [2024-12-06 11:10:39.731863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.731883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.814 [2024-12-06 11:10:39.731917] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:28.814 [2024-12-06 11:10:39.731933] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:28.814 [2024-12-06 11:10:39.731943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.731947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.814 [2024-12-06 11:10:39.731952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.814 [2024-12-06 11:10:39.731960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.814 [2024-12-06 11:10:39.731985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732063] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:28.815 [2024-12-06 11:10:39.732073] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.815 [2024-12-06 11:10:39.732115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732171] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732180] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732187] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732203] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.815 [2024-12-06 11:10:39.732232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732328] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732334] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:28.815 [2024-12-06 11:10:39.732340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732348] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732454] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:28.815 [2024-12-06 11:10:39.732459] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.815 [2024-12-06 11:10:39.732501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:28.815 [2024-12-06 11:10:39.732561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732581] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:28.815 [2024-12-06 11:10:39.732594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.815 [2024-12-06 11:10:39.732628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732689] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732695] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:28.815 [2024-12-06 11:10:39.732700] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:28.815 [2024-12-06 11:10:39.732708] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:28.815 [2024-12-06 11:10:39.732725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:28.815 [2024-12-06 11:10:39.732736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.815 [2024-12-06 11:10:39.732771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.732853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.815 [2024-12-06 11:10:39.732861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.815 [2024-12-06 11:10:39.732865] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732869] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2150510): datao=0, datal=4096, cccid=0 00:14:28.815 [2024-12-06 11:10:39.732874] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x219c8a0) on tqpair(0x2150510): expected_datao=0, 
payload_size=4096 00:14:28.815 [2024-12-06 11:10:39.732883] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732888] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.732903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.732906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.732920] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:28.815 [2024-12-06 11:10:39.732926] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:28.815 [2024-12-06 11:10:39.732931] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:28.815 [2024-12-06 11:10:39.732936] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:28.815 [2024-12-06 11:10:39.732941] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:28.815 [2024-12-06 11:10:39.732946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:28.815 [2024-12-06 11:10:39.732959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:28.815 [2024-12-06 11:10:39.732968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.732976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.732984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.815 [2024-12-06 11:10:39.733002] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.815 [2024-12-06 11:10:39.733055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.815 [2024-12-06 11:10:39.733062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.815 [2024-12-06 11:10:39.733066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219c8a0) on tqpair=0x2150510 00:14:28.815 [2024-12-06 11:10:39.733079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.733094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.815 [2024-12-06 
11:10:39.733100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.733114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.815 [2024-12-06 11:10:39.733120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2150510) 00:14:28.815 [2024-12-06 11:10:39.733134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.815 [2024-12-06 11:10:39.733140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.815 [2024-12-06 11:10:39.733143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.816 [2024-12-06 11:10:39.733158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:28.816 [2024-12-06 11:10:39.733171] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:28.816 [2024-12-06 11:10:39.733179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.816 [2024-12-06 11:10:39.733213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219c8a0, cid 0, qid 0 00:14:28.816 [2024-12-06 11:10:39.733220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ca00, cid 1, qid 0 00:14:28.816 [2024-12-06 11:10:39.733225] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219cb60, cid 2, qid 0 00:14:28.816 [2024-12-06 11:10:39.733230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.816 [2024-12-06 11:10:39.733235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ce20, cid 4, qid 0 00:14:28.816 [2024-12-06 11:10:39.733324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.733331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.733334] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x219ce20) on tqpair=0x2150510 00:14:28.816 [2024-12-06 11:10:39.733345] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:28.816 [2024-12-06 11:10:39.733350] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:28.816 [2024-12-06 11:10:39.733361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.816 [2024-12-06 11:10:39.733393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ce20, cid 4, qid 0 00:14:28.816 [2024-12-06 11:10:39.733449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.816 [2024-12-06 11:10:39.733456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.816 [2024-12-06 11:10:39.733460] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733464] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2150510): datao=0, datal=4096, cccid=4 00:14:28.816 [2024-12-06 11:10:39.733468] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x219ce20) on tqpair(0x2150510): expected_datao=0, payload_size=4096 00:14:28.816 [2024-12-06 11:10:39.733476] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733480] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.733495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.733498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ce20) on tqpair=0x2150510 00:14:28.816 [2024-12-06 11:10:39.733516] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:28.816 [2024-12-06 11:10:39.733568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.816 [2024-12-06 11:10:39.733596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733611] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.816 [2024-12-06 11:10:39.733636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ce20, cid 4, qid 0 00:14:28.816 [2024-12-06 11:10:39.733644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219cf80, cid 5, qid 0 00:14:28.816 [2024-12-06 11:10:39.733752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.816 [2024-12-06 11:10:39.733759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.816 [2024-12-06 11:10:39.733763] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733767] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2150510): datao=0, datal=1024, cccid=4 00:14:28.816 [2024-12-06 11:10:39.733772] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x219ce20) on tqpair(0x2150510): expected_datao=0, payload_size=1024 00:14:28.816 [2024-12-06 11:10:39.733779] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733783] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.733796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.733799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733804] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219cf80) on tqpair=0x2150510 00:14:28.816 [2024-12-06 11:10:39.733822] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.733830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.733834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733838] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ce20) on tqpair=0x2150510 00:14:28.816 [2024-12-06 11:10:39.733850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.733867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.816 [2024-12-06 11:10:39.733890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ce20, cid 4, qid 0 00:14:28.816 [2024-12-06 11:10:39.733972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.816 [2024-12-06 11:10:39.733978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.816 [2024-12-06 11:10:39.733982] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.733986] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2150510): datao=0, datal=3072, cccid=4 00:14:28.816 [2024-12-06 11:10:39.733991] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x219ce20) on tqpair(0x2150510): expected_datao=0, payload_size=3072 00:14:28.816 [2024-12-06 
11:10:39.733998] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734002] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.734017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.734020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734024] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ce20) on tqpair=0x2150510 00:14:28.816 [2024-12-06 11:10:39.734034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2150510) 00:14:28.816 [2024-12-06 11:10:39.734050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.816 [2024-12-06 11:10:39.734071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ce20, cid 4, qid 0 00:14:28.816 [2024-12-06 11:10:39.734131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.816 [2024-12-06 11:10:39.734138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.816 [2024-12-06 11:10:39.734141] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734145] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2150510): datao=0, datal=8, cccid=4 00:14:28.816 [2024-12-06 11:10:39.734150] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x219ce20) on tqpair(0x2150510): expected_datao=0, payload_size=8 00:14:28.816 [2024-12-06 11:10:39.734157] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734161] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.816 [2024-12-06 11:10:39.734182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.816 [2024-12-06 11:10:39.734186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.816 [2024-12-06 11:10:39.734190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ce20) on tqpair=0x2150510 00:14:28.816 ===================================================== 00:14:28.816 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:28.816 ===================================================== 00:14:28.816 Controller Capabilities/Features 00:14:28.816 ================================ 00:14:28.816 Vendor ID: 0000 00:14:28.816 Subsystem Vendor ID: 0000 00:14:28.816 Serial Number: .................... 00:14:28.816 Model Number: ........................................ 
00:14:28.816 Firmware Version: 24.01.1 00:14:28.816 Recommended Arb Burst: 0 00:14:28.816 IEEE OUI Identifier: 00 00 00 00:14:28.816 Multi-path I/O 00:14:28.816 May have multiple subsystem ports: No 00:14:28.816 May have multiple controllers: No 00:14:28.816 Associated with SR-IOV VF: No 00:14:28.816 Max Data Transfer Size: 131072 00:14:28.817 Max Number of Namespaces: 0 00:14:28.817 Max Number of I/O Queues: 1024 00:14:28.817 NVMe Specification Version (VS): 1.3 00:14:28.817 NVMe Specification Version (Identify): 1.3 00:14:28.817 Maximum Queue Entries: 128 00:14:28.817 Contiguous Queues Required: Yes 00:14:28.817 Arbitration Mechanisms Supported 00:14:28.817 Weighted Round Robin: Not Supported 00:14:28.817 Vendor Specific: Not Supported 00:14:28.817 Reset Timeout: 15000 ms 00:14:28.817 Doorbell Stride: 4 bytes 00:14:28.817 NVM Subsystem Reset: Not Supported 00:14:28.817 Command Sets Supported 00:14:28.817 NVM Command Set: Supported 00:14:28.817 Boot Partition: Not Supported 00:14:28.817 Memory Page Size Minimum: 4096 bytes 00:14:28.817 Memory Page Size Maximum: 4096 bytes 00:14:28.817 Persistent Memory Region: Not Supported 00:14:28.817 Optional Asynchronous Events Supported 00:14:28.817 Namespace Attribute Notices: Not Supported 00:14:28.817 Firmware Activation Notices: Not Supported 00:14:28.817 ANA Change Notices: Not Supported 00:14:28.817 PLE Aggregate Log Change Notices: Not Supported 00:14:28.817 LBA Status Info Alert Notices: Not Supported 00:14:28.817 EGE Aggregate Log Change Notices: Not Supported 00:14:28.817 Normal NVM Subsystem Shutdown event: Not Supported 00:14:28.817 Zone Descriptor Change Notices: Not Supported 00:14:28.817 Discovery Log Change Notices: Supported 00:14:28.817 Controller Attributes 00:14:28.817 128-bit Host Identifier: Not Supported 00:14:28.817 Non-Operational Permissive Mode: Not Supported 00:14:28.817 NVM Sets: Not Supported 00:14:28.817 Read Recovery Levels: Not Supported 00:14:28.817 Endurance Groups: Not Supported 00:14:28.817 Predictable Latency Mode: Not Supported 00:14:28.817 Traffic Based Keep ALive: Not Supported 00:14:28.817 Namespace Granularity: Not Supported 00:14:28.817 SQ Associations: Not Supported 00:14:28.817 UUID List: Not Supported 00:14:28.817 Multi-Domain Subsystem: Not Supported 00:14:28.817 Fixed Capacity Management: Not Supported 00:14:28.817 Variable Capacity Management: Not Supported 00:14:28.817 Delete Endurance Group: Not Supported 00:14:28.817 Delete NVM Set: Not Supported 00:14:28.817 Extended LBA Formats Supported: Not Supported 00:14:28.817 Flexible Data Placement Supported: Not Supported 00:14:28.817 00:14:28.817 Controller Memory Buffer Support 00:14:28.817 ================================ 00:14:28.817 Supported: No 00:14:28.817 00:14:28.817 Persistent Memory Region Support 00:14:28.817 ================================ 00:14:28.817 Supported: No 00:14:28.817 00:14:28.817 Admin Command Set Attributes 00:14:28.817 ============================ 00:14:28.817 Security Send/Receive: Not Supported 00:14:28.817 Format NVM: Not Supported 00:14:28.817 Firmware Activate/Download: Not Supported 00:14:28.817 Namespace Management: Not Supported 00:14:28.817 Device Self-Test: Not Supported 00:14:28.817 Directives: Not Supported 00:14:28.817 NVMe-MI: Not Supported 00:14:28.817 Virtualization Management: Not Supported 00:14:28.817 Doorbell Buffer Config: Not Supported 00:14:28.817 Get LBA Status Capability: Not Supported 00:14:28.817 Command & Feature Lockdown Capability: Not Supported 00:14:28.817 Abort Command Limit: 1 00:14:28.817 
Async Event Request Limit: 4 00:14:28.817 Number of Firmware Slots: N/A 00:14:28.817 Firmware Slot 1 Read-Only: N/A 00:14:28.817 Firmware Activation Without Reset: N/A 00:14:28.817 Multiple Update Detection Support: N/A 00:14:28.817 Firmware Update Granularity: No Information Provided 00:14:28.817 Per-Namespace SMART Log: No 00:14:28.817 Asymmetric Namespace Access Log Page: Not Supported 00:14:28.817 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:28.817 Command Effects Log Page: Not Supported 00:14:28.817 Get Log Page Extended Data: Supported 00:14:28.817 Telemetry Log Pages: Not Supported 00:14:28.817 Persistent Event Log Pages: Not Supported 00:14:28.817 Supported Log Pages Log Page: May Support 00:14:28.817 Commands Supported & Effects Log Page: Not Supported 00:14:28.817 Feature Identifiers & Effects Log Page:May Support 00:14:28.817 NVMe-MI Commands & Effects Log Page: May Support 00:14:28.817 Data Area 4 for Telemetry Log: Not Supported 00:14:28.817 Error Log Page Entries Supported: 128 00:14:28.817 Keep Alive: Not Supported 00:14:28.817 00:14:28.817 NVM Command Set Attributes 00:14:28.817 ========================== 00:14:28.817 Submission Queue Entry Size 00:14:28.817 Max: 1 00:14:28.817 Min: 1 00:14:28.817 Completion Queue Entry Size 00:14:28.817 Max: 1 00:14:28.817 Min: 1 00:14:28.817 Number of Namespaces: 0 00:14:28.817 Compare Command: Not Supported 00:14:28.817 Write Uncorrectable Command: Not Supported 00:14:28.817 Dataset Management Command: Not Supported 00:14:28.817 Write Zeroes Command: Not Supported 00:14:28.817 Set Features Save Field: Not Supported 00:14:28.817 Reservations: Not Supported 00:14:28.817 Timestamp: Not Supported 00:14:28.817 Copy: Not Supported 00:14:28.817 Volatile Write Cache: Not Present 00:14:28.817 Atomic Write Unit (Normal): 1 00:14:28.817 Atomic Write Unit (PFail): 1 00:14:28.817 Atomic Compare & Write Unit: 1 00:14:28.817 Fused Compare & Write: Supported 00:14:28.817 Scatter-Gather List 00:14:28.817 SGL Command Set: Supported 00:14:28.817 SGL Keyed: Supported 00:14:28.817 SGL Bit Bucket Descriptor: Not Supported 00:14:28.817 SGL Metadata Pointer: Not Supported 00:14:28.817 Oversized SGL: Not Supported 00:14:28.817 SGL Metadata Address: Not Supported 00:14:28.817 SGL Offset: Supported 00:14:28.817 Transport SGL Data Block: Not Supported 00:14:28.817 Replay Protected Memory Block: Not Supported 00:14:28.817 00:14:28.817 Firmware Slot Information 00:14:28.817 ========================= 00:14:28.817 Active slot: 0 00:14:28.817 00:14:28.817 00:14:28.817 Error Log 00:14:28.817 ========= 00:14:28.817 00:14:28.817 Active Namespaces 00:14:28.817 ================= 00:14:28.817 Discovery Log Page 00:14:28.817 ================== 00:14:28.817 Generation Counter: 2 00:14:28.817 Number of Records: 2 00:14:28.817 Record Format: 0 00:14:28.817 00:14:28.817 Discovery Log Entry 0 00:14:28.817 ---------------------- 00:14:28.817 Transport Type: 3 (TCP) 00:14:28.817 Address Family: 1 (IPv4) 00:14:28.817 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:28.817 Entry Flags: 00:14:28.817 Duplicate Returned Information: 1 00:14:28.817 Explicit Persistent Connection Support for Discovery: 1 00:14:28.817 Transport Requirements: 00:14:28.817 Secure Channel: Not Required 00:14:28.817 Port ID: 0 (0x0000) 00:14:28.817 Controller ID: 65535 (0xffff) 00:14:28.817 Admin Max SQ Size: 128 00:14:28.817 Transport Service Identifier: 4420 00:14:28.817 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:28.817 Transport Address: 10.0.0.2 00:14:28.817 
Discovery Log Entry 1 00:14:28.817 ---------------------- 00:14:28.817 Transport Type: 3 (TCP) 00:14:28.817 Address Family: 1 (IPv4) 00:14:28.817 Subsystem Type: 2 (NVM Subsystem) 00:14:28.817 Entry Flags: 00:14:28.817 Duplicate Returned Information: 0 00:14:28.817 Explicit Persistent Connection Support for Discovery: 0 00:14:28.817 Transport Requirements: 00:14:28.817 Secure Channel: Not Required 00:14:28.817 Port ID: 0 (0x0000) 00:14:28.817 Controller ID: 65535 (0xffff) 00:14:28.817 Admin Max SQ Size: 128 00:14:28.817 Transport Service Identifier: 4420 00:14:28.817 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:28.817 Transport Address: 10.0.0.2 [2024-12-06 11:10:39.734279] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:28.817 [2024-12-06 11:10:39.734294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.817 [2024-12-06 11:10:39.734302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.817 [2024-12-06 11:10:39.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.817 [2024-12-06 11:10:39.734314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.817 [2024-12-06 11:10:39.734323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.817 [2024-12-06 11:10:39.734328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.817 [2024-12-06 11:10:39.734331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.817 [2024-12-06 11:10:39.734339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.817 [2024-12-06 11:10:39.734361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.817 [2024-12-06 11:10:39.734410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.817 [2024-12-06 11:10:39.734417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.817 [2024-12-06 11:10:39.734420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.734449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.734469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.734533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.734539] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.734543] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734553] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:28.818 [2024-12-06 11:10:39.734558] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:28.818 [2024-12-06 11:10:39.734581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.734599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.734617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.734664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.734671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.734674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.734706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.734722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.734764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.734771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.734775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734794] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734798] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.734805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.734821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.734866] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 
11:10:39.734873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.734876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734896] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.734907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.734922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.734965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.734971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.734975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.734990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.734999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.735006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.735022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.735065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.735071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.735075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.735090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.735106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.735121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.735204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.735212] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.735216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:28.818 [2024-12-06 11:10:39.735221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.735233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.735250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.735268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.735320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.735327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.735331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735335] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.735347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.735364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.735387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.735436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.735443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.735447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.735463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.735472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.735480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.735512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.739623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.739645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.739650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.739655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.739670] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.739676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.739679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2150510) 00:14:28.818 [2024-12-06 11:10:39.739688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.818 [2024-12-06 11:10:39.739712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x219ccc0, cid 3, qid 0 00:14:28.818 [2024-12-06 11:10:39.739772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.818 [2024-12-06 11:10:39.739795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.818 [2024-12-06 11:10:39.739799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.818 [2024-12-06 11:10:39.739803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x219ccc0) on tqpair=0x2150510 00:14:28.818 [2024-12-06 11:10:39.739813] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:28.818 00:14:28.819 11:10:39 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:28.819 [2024-12-06 11:10:39.774413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:28.819 [2024-12-06 11:10:39.774453] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80371 ] 00:14:28.819 [2024-12-06 11:10:39.909693] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:28.819 [2024-12-06 11:10:39.909761] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:28.819 [2024-12-06 11:10:39.909768] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:28.819 [2024-12-06 11:10:39.909777] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:28.819 [2024-12-06 11:10:39.909786] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:28.819 [2024-12-06 11:10:39.909898] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:28.819 [2024-12-06 11:10:39.909960] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x892510 0 00:14:28.819 [2024-12-06 11:10:39.922558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:28.819 [2024-12-06 11:10:39.922580] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:28.819 [2024-12-06 11:10:39.922603] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:28.819 [2024-12-06 11:10:39.922607] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:28.819 [2024-12-06 11:10:39.922646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.922653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.922657] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.922668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:28.819 [2024-12-06 11:10:39.922697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.929586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.929606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.929628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.929643] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:28.819 [2024-12-06 11:10:39.929661] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:28.819 [2024-12-06 11:10:39.929669] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:28.819 [2024-12-06 11:10:39.929684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.929703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.819 [2024-12-06 11:10:39.929732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.929818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.929825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.929829] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.929839] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:28.819 [2024-12-06 11:10:39.929848] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:28.819 [2024-12-06 11:10:39.929855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.929880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.819 [2024-12-06 11:10:39.929899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.929965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.929972] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.929976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.929980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.929986] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:28.819 [2024-12-06 11:10:39.929995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.930018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.819 [2024-12-06 11:10:39.930035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.930076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.930083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.930087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.930097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.930123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.819 [2024-12-06 11:10:39.930140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.930187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.930194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.930197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.930206] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:28.819 [2024-12-06 11:10:39.930211] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930325] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:28.819 [2024-12-06 11:10:39.930335] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930353] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.819 [2024-12-06 11:10:39.930360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.819 [2024-12-06 11:10:39.930380] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.819 [2024-12-06 11:10:39.930422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.819 [2024-12-06 11:10:39.930428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.819 [2024-12-06 11:10:39.930432] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.819 [2024-12-06 11:10:39.930442] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:28.819 [2024-12-06 11:10:39.930452] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.819 [2024-12-06 11:10:39.930457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.930468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.820 [2024-12-06 11:10:39.930485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.820 [2024-12-06 11:10:39.930553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.820 [2024-12-06 11:10:39.930562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.820 [2024-12-06 11:10:39.930565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.820 [2024-12-06 11:10:39.930575] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:28.820 [2024-12-06 11:10:39.930580] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.930588] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:28.820 [2024-12-06 11:10:39.930603] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 
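The trace in this stretch is the controller-initialization state machine that spdk_nvme_connect() drives for nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT on the admin queue, reads of VS and CAP, CC.EN written to 1, polling until CSTS.RDY = 1, then IDENTIFY and the feature/queue setup that follows. As a rough sketch only (not the test's code; the app name and printf formatting are illustrative assumptions, the transport string is copied from the -r argument above, and the calls are SPDK's public NVMe host API), the same connect-and-identify flow looks roughly like this:

/* Minimal sketch: connect to the NVMe-oF/TCP target seen in this log and
 * print a few Identify Controller fields, roughly what spdk_nvme_identify
 * does before dumping the full capability report. Illustrative only. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Transport ID string taken from the spdk_nvme_identify -r argument above. */
    struct spdk_nvme_transport_id trid = {};
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() runs the state machine traced here: FABRIC CONNECT,
     * read VS/CAP, set CC.EN = 1, poll CSTS.RDY, then IDENTIFY. */
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts,
                                                      sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.traddr);
        return 1;
    }

    /* Identify Controller data, as reported in the dump that follows. */
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number:    %.20s\n", cdata->sn);
    printf("Model Number:     %.40s\n", cdata->mn);
    printf("Firmware Version: %.8s\n",  cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}

Building such a sketch means linking against SPDK's nvme and env libraries; spdk_nvme_connect() blocks until the controller reports ready, with the per-state 15000 ms timeouts visible in the trace bounding that wait.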
00:14:28.820 [2024-12-06 11:10:39.930613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.930629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.820 [2024-12-06 11:10:39.930650] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.820 [2024-12-06 11:10:39.930733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.820 [2024-12-06 11:10:39.930740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.820 [2024-12-06 11:10:39.930744] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930748] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=4096, cccid=0 00:14:28.820 [2024-12-06 11:10:39.930753] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8de8a0) on tqpair(0x892510): expected_datao=0, payload_size=4096 00:14:28.820 [2024-12-06 11:10:39.930761] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930766] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.820 [2024-12-06 11:10:39.930780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.820 [2024-12-06 11:10:39.930784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.820 [2024-12-06 11:10:39.930796] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:28.820 [2024-12-06 11:10:39.930801] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:28.820 [2024-12-06 11:10:39.930806] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:28.820 [2024-12-06 11:10:39.930810] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:28.820 [2024-12-06 11:10:39.930815] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:28.820 [2024-12-06 11:10:39.930820] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.930833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.930841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930850] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.930858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.820 [2024-12-06 11:10:39.930878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.820 [2024-12-06 11:10:39.930925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.820 [2024-12-06 11:10:39.930932] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.820 [2024-12-06 11:10:39.930935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8de8a0) on tqpair=0x892510 00:14:28.820 [2024-12-06 11:10:39.930947] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.930962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.820 [2024-12-06 11:10:39.930968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.930982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.820 [2024-12-06 11:10:39.930988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.930995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.931001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.820 [2024-12-06 11:10:39.931008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931015] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.931022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.820 [2024-12-06 11:10:39.931027] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.931062] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.820 [2024-12-06 11:10:39.931082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8de8a0, cid 0, qid 0 00:14:28.820 [2024-12-06 11:10:39.931090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dea00, cid 1, qid 0 00:14:28.820 [2024-12-06 11:10:39.931095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8deb60, cid 2, qid 0 00:14:28.820 [2024-12-06 11:10:39.931100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.820 [2024-12-06 11:10:39.931104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.820 [2024-12-06 11:10:39.931219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.820 [2024-12-06 11:10:39.931236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.820 [2024-12-06 11:10:39.931242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.820 [2024-12-06 11:10:39.931252] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:28.820 [2024-12-06 11:10:39.931258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931268] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.931305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.820 [2024-12-06 11:10:39.931327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.820 [2024-12-06 11:10:39.931380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.820 [2024-12-06 11:10:39.931388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.820 [2024-12-06 11:10:39.931392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.820 [2024-12-06 11:10:39.931460] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931480] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:28.820 [2024-12-06 11:10:39.931530] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.820 [2024-12-06 11:10:39.931597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.820 [2024-12-06 11:10:39.931624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.820 [2024-12-06 11:10:39.931696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.820 [2024-12-06 11:10:39.931704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.820 [2024-12-06 11:10:39.931708] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.820 [2024-12-06 11:10:39.931712] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=4096, cccid=4 00:14:28.821 [2024-12-06 11:10:39.931717] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dee20) on tqpair(0x892510): expected_datao=0, payload_size=4096 00:14:28.821 [2024-12-06 11:10:39.931726] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931731] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.931746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.931750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.931773] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:28.821 [2024-12-06 11:10:39.931784] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.931795] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.931803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.931820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.931841] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.821 [2024-12-06 11:10:39.931918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.821 [2024-12-06 11:10:39.931925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.821 [2024-12-06 11:10:39.931929] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931948] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x892510): datao=0, datal=4096, cccid=4 00:14:28.821 [2024-12-06 11:10:39.931953] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dee20) on tqpair(0x892510): expected_datao=0, payload_size=4096 00:14:28.821 [2024-12-06 11:10:39.931961] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931965] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931974] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.931980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.931984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.931988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932004] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932015] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.821 [2024-12-06 11:10:39.932114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.821 [2024-12-06 11:10:39.932121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.821 [2024-12-06 11:10:39.932125] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932130] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=4096, cccid=4 00:14:28.821 [2024-12-06 11:10:39.932134] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dee20) on tqpair(0x892510): expected_datao=0, payload_size=4096 00:14:28.821 [2024-12-06 11:10:39.932142] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932146] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932187] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932211] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932216] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:28.821 [2024-12-06 11:10:39.932221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:28.821 [2024-12-06 11:10:39.932239] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:28.821 [2024-12-06 11:10:39.932261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932286] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.821 [2024-12-06 11:10:39.932346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.821 [2024-12-06 11:10:39.932353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8def80, cid 5, qid 0 00:14:28.821 [2024-12-06 11:10:39.932421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932456] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8def80) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 
11:10:39.932472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8def80, cid 5, qid 0 00:14:28.821 [2024-12-06 11:10:39.932545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8def80) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8def80, cid 5, qid 0 00:14:28.821 [2024-12-06 11:10:39.932670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8def80) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8def80, cid 5, qid 0 00:14:28.821 [2024-12-06 11:10:39.932784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.821 [2024-12-06 11:10:39.932795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.821 [2024-12-06 11:10:39.932799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8def80) on tqpair=0x892510 00:14:28.821 [2024-12-06 11:10:39.932817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.821 [2024-12-06 11:10:39.932826] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x892510) 00:14:28.821 [2024-12-06 11:10:39.932834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.821 [2024-12-06 11:10:39.932841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x892510) 00:14:28.822 [2024-12-06 11:10:39.932855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.822 [2024-12-06 11:10:39.932862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932870] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x892510) 00:14:28.822 [2024-12-06 11:10:39.932877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.822 [2024-12-06 11:10:39.932884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.932892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x892510) 00:14:28.822 [2024-12-06 11:10:39.932898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.822 [2024-12-06 11:10:39.932917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8def80, cid 5, qid 0 00:14:28.822 [2024-12-06 11:10:39.932924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dee20, cid 4, qid 0 00:14:28.822 [2024-12-06 11:10:39.932929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8df0e0, cid 6, qid 0 00:14:28.822 [2024-12-06 11:10:39.932934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8df240, cid 7, qid 0 00:14:28.822 [2024-12-06 11:10:39.933061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.822 [2024-12-06 11:10:39.933072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.822 [2024-12-06 11:10:39.933076] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933080] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=8192, cccid=5 00:14:28.822 [2024-12-06 11:10:39.933085] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8def80) on tqpair(0x892510): expected_datao=0, payload_size=8192 00:14:28.822 [2024-12-06 11:10:39.933103] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933108] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.822 [2024-12-06 11:10:39.933120] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.822 [2024-12-06 11:10:39.933124] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933128] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=512, cccid=4 00:14:28.822 [2024-12-06 11:10:39.933133] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dee20) on tqpair(0x892510): expected_datao=0, payload_size=512 00:14:28.822 [2024-12-06 11:10:39.933140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933143] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.822 [2024-12-06 11:10:39.933155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.822 [2024-12-06 11:10:39.933158] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933162] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=512, cccid=6 00:14:28.822 [2024-12-06 11:10:39.933167] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8df0e0) on tqpair(0x892510): expected_datao=0, payload_size=512 00:14:28.822 [2024-12-06 11:10:39.933173] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933177] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:28.822 [2024-12-06 11:10:39.933188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:28.822 [2024-12-06 11:10:39.933192] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933196] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x892510): datao=0, datal=4096, cccid=7 00:14:28.822 [2024-12-06 11:10:39.933200] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8df240) on tqpair(0x892510): expected_datao=0, payload_size=4096 00:14:28.822 [2024-12-06 11:10:39.933207] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933211] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.822 [2024-12-06 11:10:39.933223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.822 [2024-12-06 11:10:39.933226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8def80) on tqpair=0x892510 00:14:28.822 ===================================================== 00:14:28.822 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:28.822 ===================================================== 00:14:28.822 Controller Capabilities/Features 00:14:28.822 ================================ 00:14:28.822 Vendor ID: 8086 00:14:28.822 Subsystem Vendor ID: 8086 00:14:28.822 Serial Number: SPDK00000000000001 00:14:28.822 Model Number: SPDK bdev Controller 00:14:28.822 Firmware Version: 24.01.1 00:14:28.822 Recommended Arb Burst: 6 00:14:28.822 IEEE OUI Identifier: e4 d2 5c 00:14:28.822 Multi-path I/O 00:14:28.822 May have multiple 
subsystem ports: Yes 00:14:28.822 May have multiple controllers: Yes 00:14:28.822 Associated with SR-IOV VF: No 00:14:28.822 Max Data Transfer Size: 131072 00:14:28.822 Max Number of Namespaces: 32 00:14:28.822 Max Number of I/O Queues: 127 00:14:28.822 NVMe Specification Version (VS): 1.3 00:14:28.822 NVMe Specification Version (Identify): 1.3 00:14:28.822 Maximum Queue Entries: 128 00:14:28.822 Contiguous Queues Required: Yes 00:14:28.822 Arbitration Mechanisms Supported 00:14:28.822 Weighted Round Robin: Not Supported 00:14:28.822 Vendor Specific: Not Supported 00:14:28.822 Reset Timeout: 15000 ms 00:14:28.822 Doorbell Stride: 4 bytes 00:14:28.822 NVM Subsystem Reset: Not Supported 00:14:28.822 Command Sets Supported 00:14:28.822 NVM Command Set: Supported 00:14:28.822 Boot Partition: Not Supported 00:14:28.822 Memory Page Size Minimum: 4096 bytes 00:14:28.822 Memory Page Size Maximum: 4096 bytes 00:14:28.822 Persistent Memory Region: Not Supported 00:14:28.822 Optional Asynchronous Events Supported 00:14:28.822 Namespace Attribute Notices: Supported 00:14:28.822 Firmware Activation Notices: Not Supported 00:14:28.822 ANA Change Notices: Not Supported 00:14:28.822 PLE Aggregate Log Change Notices: Not Supported 00:14:28.822 LBA Status Info Alert Notices: Not Supported 00:14:28.822 EGE Aggregate Log Change Notices: Not Supported 00:14:28.822 Normal NVM Subsystem Shutdown event: Not Supported 00:14:28.822 Zone Descriptor Change Notices: Not Supported 00:14:28.822 Discovery Log Change Notices: Not Supported 00:14:28.822 Controller Attributes 00:14:28.822 128-bit Host Identifier: Supported 00:14:28.822 Non-Operational Permissive Mode: Not Supported 00:14:28.822 NVM Sets: Not Supported 00:14:28.822 Read Recovery Levels: Not Supported 00:14:28.822 Endurance Groups: Not Supported 00:14:28.822 Predictable Latency Mode: Not Supported 00:14:28.822 Traffic Based Keep ALive: Not Supported 00:14:28.822 Namespace Granularity: Not Supported 00:14:28.822 SQ Associations: Not Supported 00:14:28.822 UUID List: Not Supported 00:14:28.822 Multi-Domain Subsystem: Not Supported 00:14:28.822 Fixed Capacity Management: Not Supported 00:14:28.822 Variable Capacity Management: Not Supported 00:14:28.822 Delete Endurance Group: Not Supported 00:14:28.822 Delete NVM Set: Not Supported 00:14:28.822 Extended LBA Formats Supported: Not Supported 00:14:28.822 Flexible Data Placement Supported: Not Supported 00:14:28.822 00:14:28.822 Controller Memory Buffer Support 00:14:28.822 ================================ 00:14:28.822 Supported: No 00:14:28.822 00:14:28.822 Persistent Memory Region Support 00:14:28.822 ================================ 00:14:28.822 Supported: No 00:14:28.822 00:14:28.822 Admin Command Set Attributes 00:14:28.822 ============================ 00:14:28.822 Security Send/Receive: Not Supported 00:14:28.822 Format NVM: Not Supported 00:14:28.822 Firmware Activate/Download: Not Supported 00:14:28.822 Namespace Management: Not Supported 00:14:28.822 Device Self-Test: Not Supported 00:14:28.822 Directives: Not Supported 00:14:28.822 NVMe-MI: Not Supported 00:14:28.822 Virtualization Management: Not Supported 00:14:28.822 Doorbell Buffer Config: Not Supported 00:14:28.822 Get LBA Status Capability: Not Supported 00:14:28.822 Command & Feature Lockdown Capability: Not Supported 00:14:28.822 Abort Command Limit: 4 00:14:28.822 Async Event Request Limit: 4 00:14:28.822 Number of Firmware Slots: N/A 00:14:28.822 Firmware Slot 1 Read-Only: N/A 00:14:28.822 Firmware Activation Without Reset: [2024-12-06 
11:10:39.933246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.822 [2024-12-06 11:10:39.933253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.822 [2024-12-06 11:10:39.933256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8dee20) on tqpair=0x892510 00:14:28.822 [2024-12-06 11:10:39.933273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.822 [2024-12-06 11:10:39.933279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.822 [2024-12-06 11:10:39.933283] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.822 [2024-12-06 11:10:39.933286] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8df0e0) on tqpair=0x892510 00:14:28.822 [2024-12-06 11:10:39.933294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.823 [2024-12-06 11:10:39.933300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.823 [2024-12-06 11:10:39.933304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.933307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8df240) on tqpair=0x892510 00:14:28.823 N/A 00:14:28.823 Multiple Update Detection Support: N/A 00:14:28.823 Firmware Update Granularity: No Information Provided 00:14:28.823 Per-Namespace SMART Log: No 00:14:28.823 Asymmetric Namespace Access Log Page: Not Supported 00:14:28.823 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:28.823 Command Effects Log Page: Supported 00:14:28.823 Get Log Page Extended Data: Supported 00:14:28.823 Telemetry Log Pages: Not Supported 00:14:28.823 Persistent Event Log Pages: Not Supported 00:14:28.823 Supported Log Pages Log Page: May Support 00:14:28.823 Commands Supported & Effects Log Page: Not Supported 00:14:28.823 Feature Identifiers & Effects Log Page:May Support 00:14:28.823 NVMe-MI Commands & Effects Log Page: May Support 00:14:28.823 Data Area 4 for Telemetry Log: Not Supported 00:14:28.823 Error Log Page Entries Supported: 128 00:14:28.823 Keep Alive: Supported 00:14:28.823 Keep Alive Granularity: 10000 ms 00:14:28.823 00:14:28.823 NVM Command Set Attributes 00:14:28.823 ========================== 00:14:28.823 Submission Queue Entry Size 00:14:28.823 Max: 64 00:14:28.823 Min: 64 00:14:28.823 Completion Queue Entry Size 00:14:28.823 Max: 16 00:14:28.823 Min: 16 00:14:28.823 Number of Namespaces: 32 00:14:28.823 Compare Command: Supported 00:14:28.823 Write Uncorrectable Command: Not Supported 00:14:28.823 Dataset Management Command: Supported 00:14:28.823 Write Zeroes Command: Supported 00:14:28.823 Set Features Save Field: Not Supported 00:14:28.823 Reservations: Supported 00:14:28.823 Timestamp: Not Supported 00:14:28.823 Copy: Supported 00:14:28.823 Volatile Write Cache: Present 00:14:28.823 Atomic Write Unit (Normal): 1 00:14:28.823 Atomic Write Unit (PFail): 1 00:14:28.823 Atomic Compare & Write Unit: 1 00:14:28.823 Fused Compare & Write: Supported 00:14:28.823 Scatter-Gather List 00:14:28.823 SGL Command Set: Supported 00:14:28.823 SGL Keyed: Supported 00:14:28.823 SGL Bit Bucket Descriptor: Not Supported 00:14:28.823 SGL Metadata Pointer: Not Supported 00:14:28.823 Oversized SGL: Not Supported 00:14:28.823 SGL Metadata Address: Not Supported 00:14:28.823 SGL Offset: Supported 00:14:28.823 Transport SGL Data Block: Not Supported 
00:14:28.823 Replay Protected Memory Block: Not Supported 00:14:28.823 00:14:28.823 Firmware Slot Information 00:14:28.823 ========================= 00:14:28.823 Active slot: 1 00:14:28.823 Slot 1 Firmware Revision: 24.01.1 00:14:28.823 00:14:28.823 00:14:28.823 Commands Supported and Effects 00:14:28.823 ============================== 00:14:28.823 Admin Commands 00:14:28.823 -------------- 00:14:28.823 Get Log Page (02h): Supported 00:14:28.823 Identify (06h): Supported 00:14:28.823 Abort (08h): Supported 00:14:28.823 Set Features (09h): Supported 00:14:28.823 Get Features (0Ah): Supported 00:14:28.823 Asynchronous Event Request (0Ch): Supported 00:14:28.823 Keep Alive (18h): Supported 00:14:28.823 I/O Commands 00:14:28.823 ------------ 00:14:28.823 Flush (00h): Supported LBA-Change 00:14:28.823 Write (01h): Supported LBA-Change 00:14:28.823 Read (02h): Supported 00:14:28.823 Compare (05h): Supported 00:14:28.823 Write Zeroes (08h): Supported LBA-Change 00:14:28.823 Dataset Management (09h): Supported LBA-Change 00:14:28.823 Copy (19h): Supported LBA-Change 00:14:28.823 Unknown (79h): Supported LBA-Change 00:14:28.823 Unknown (7Ah): Supported 00:14:28.823 00:14:28.823 Error Log 00:14:28.823 ========= 00:14:28.823 00:14:28.823 Arbitration 00:14:28.823 =========== 00:14:28.823 Arbitration Burst: 1 00:14:28.823 00:14:28.823 Power Management 00:14:28.823 ================ 00:14:28.823 Number of Power States: 1 00:14:28.823 Current Power State: Power State #0 00:14:28.823 Power State #0: 00:14:28.823 Max Power: 0.00 W 00:14:28.823 Non-Operational State: Operational 00:14:28.823 Entry Latency: Not Reported 00:14:28.823 Exit Latency: Not Reported 00:14:28.823 Relative Read Throughput: 0 00:14:28.823 Relative Read Latency: 0 00:14:28.823 Relative Write Throughput: 0 00:14:28.823 Relative Write Latency: 0 00:14:28.823 Idle Power: Not Reported 00:14:28.823 Active Power: Not Reported 00:14:28.823 Non-Operational Permissive Mode: Not Supported 00:14:28.823 00:14:28.823 Health Information 00:14:28.823 ================== 00:14:28.823 Critical Warnings: 00:14:28.823 Available Spare Space: OK 00:14:28.823 Temperature: OK 00:14:28.823 Device Reliability: OK 00:14:28.823 Read Only: No 00:14:28.823 Volatile Memory Backup: OK 00:14:28.823 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:28.823 Temperature Threshold: [2024-12-06 11:10:39.933416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.933424] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.933428] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x892510) 00:14:28.823 [2024-12-06 11:10:39.933436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.823 [2024-12-06 11:10:39.933461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8df240, cid 7, qid 0 00:14:28.823 [2024-12-06 11:10:39.933507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.823 [2024-12-06 11:10:39.933514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.823 [2024-12-06 11:10:39.933518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.933522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8df240) on tqpair=0x892510 00:14:28.823 [2024-12-06 11:10:39.936617] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:28.823 [2024-12-06 11:10:39.936649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.823 [2024-12-06 11:10:39.936659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.823 [2024-12-06 11:10:39.936665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.823 [2024-12-06 11:10:39.936672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.823 [2024-12-06 11:10:39.936682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936691] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.823 [2024-12-06 11:10:39.936699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.823 [2024-12-06 11:10:39.936725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.823 [2024-12-06 11:10:39.936775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.823 [2024-12-06 11:10:39.936782] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.823 [2024-12-06 11:10:39.936786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.823 [2024-12-06 11:10:39.936798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.823 [2024-12-06 11:10:39.936814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.823 [2024-12-06 11:10:39.936836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.823 [2024-12-06 11:10:39.936903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.823 [2024-12-06 11:10:39.936921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.823 [2024-12-06 11:10:39.936925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.823 [2024-12-06 11:10:39.936930] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.936935] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:28.824 [2024-12-06 11:10:39.936940] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:28.824 [2024-12-06 11:10:39.936951] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.936956] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.936959] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.936967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.936985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937071] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937075] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937099] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937160] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937381] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937572] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 
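The "RTD3E = 0 us" and "shutdown timeout = 10000 ms" records a few lines up come from nvme_ctrlr_shutdown_set_cc_done: the driver sizes its shutdown wait from the RTD3E value the controller reported in Identify Controller (given in microseconds) and, as logged here, falls back to a 10-second budget when RTD3E is zero. A minimal bash sketch of that fallback, mirroring only the values printed in this log (the driver's exact rounding and minimums may differ):

  rtd3e_us=0                                               # value logged above ("RTD3E = 0 us")
  if (( rtd3e_us == 0 )); then
    shutdown_timeout_ms=10000                              # fallback budget when no RTD3E is reported
  else
    shutdown_timeout_ms=$(( (rtd3e_us + 999) / 1000 ))     # assumed round-up from us to ms
  fi
  echo "shutdown timeout = ${shutdown_timeout_ms} ms"      # matches the record above

The FABRIC PROPERTY GET entries that follow are the status polls issued inside that budget; the run ends further down with "shutdown complete in 6 milliseconds".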
[2024-12-06 11:10:39.937670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937685] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937695] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937703] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937892] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.937895] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.937909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.937918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.937925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.937941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.937990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.937997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:28.824 [2024-12-06 11:10:39.938001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.938015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.938031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.938047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.824 [2024-12-06 11:10:39.938096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.824 [2024-12-06 11:10:39.938102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.824 [2024-12-06 11:10:39.938106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.824 [2024-12-06 11:10:39.938120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.824 [2024-12-06 11:10:39.938129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.824 [2024-12-06 11:10:39.938136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.824 [2024-12-06 11:10:39.938152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938208] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938307] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938315] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938356] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938684] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938773] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938866] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.938892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.938916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.938923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.938939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.938986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.938992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.938996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.939011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 
[2024-12-06 11:10:39.939026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.939043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.939086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.939092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.939096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.939110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939118] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.939126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.939142] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.939221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.939230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.939234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.939250] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.939267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.939286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.939333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.939340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.939344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.939359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.939377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.825 [2024-12-06 11:10:39.939394] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.825 [2024-12-06 11:10:39.939441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.825 [2024-12-06 11:10:39.939448] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.825 [2024-12-06 11:10:39.939452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.825 [2024-12-06 11:10:39.939467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.825 [2024-12-06 11:10:39.939476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.825 [2024-12-06 11:10:39.939484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.939528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.939583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.939602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.939607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.939623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.939640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.939659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.939711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.939717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.939721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.939736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.939753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.939769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.939820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 
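Each of the repeated FABRIC PROPERTY GET records in this stretch is one poll of the controller's status register while the shutdown budget above runs down; on a fabrics transport, register reads are carried as Property Get admin capsules, so the wait shows up as a burst of identical admin commands rather than MMIO reads. A hedged bash sketch of the same poll-until-deadline shape (the helper name and the probe command are placeholders, not SPDK or nvme-cli calls):

  # Hypothetical helper: retry a status probe until it succeeds or the deadline passes.
  wait_until_done() {
    local timeout_ms=$1; shift
    local deadline=$(( $(date +%s%3N) + timeout_ms ))      # GNU date: %3N = milliseconds
    until "$@"; do                                         # "$@" stands in for the status probe
      (( $(date +%s%3N) >= deadline )) && return 1         # give up once the budget is spent
      sleep 0.01                                           # brief pause between polls
    done
  }

In this log the probe succeeds quickly: the destruct path reports "shutdown complete in 6 milliseconds" a few lines below.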
[2024-12-06 11:10:39.939826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.939830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.939845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.939861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.939878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.939943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.939950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.939954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.939969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.939978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.939986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.940003] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.940052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.940059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.940063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.940078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.940095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.940113] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.940159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.940166] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.940170] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:28.826 [2024-12-06 11:10:39.940174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.940185] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.940202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.940219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.940265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.940272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.940276] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.940307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.940338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.940354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.940397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.940404] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.940408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940412] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.940422] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.940438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.940454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.940500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.940511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.940515] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.940520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.940530] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.943605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.943622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x892510) 00:14:28.826 [2024-12-06 11:10:39.943649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:28.826 [2024-12-06 11:10:39.943675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8decc0, cid 3, qid 0 00:14:28.826 [2024-12-06 11:10:39.943728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:28.826 [2024-12-06 11:10:39.943735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:28.826 [2024-12-06 11:10:39.943739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:28.826 [2024-12-06 11:10:39.943743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8decc0) on tqpair=0x892510 00:14:28.826 [2024-12-06 11:10:39.943752] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:14:29.086 0 Kelvin (-273 Celsius) 00:14:29.086 Available Spare: 0% 00:14:29.086 Available Spare Threshold: 0% 00:14:29.086 Life Percentage Used: 0% 00:14:29.086 Data Units Read: 0 00:14:29.086 Data Units Written: 0 00:14:29.086 Host Read Commands: 0 00:14:29.086 Host Write Commands: 0 00:14:29.086 Controller Busy Time: 0 minutes 00:14:29.086 Power Cycles: 0 00:14:29.086 Power On Hours: 0 hours 00:14:29.086 Unsafe Shutdowns: 0 00:14:29.086 Unrecoverable Media Errors: 0 00:14:29.086 Lifetime Error Log Entries: 0 00:14:29.086 Warning Temperature Time: 0 minutes 00:14:29.086 Critical Temperature Time: 0 minutes 00:14:29.086 00:14:29.086 Number of Queues 00:14:29.086 ================ 00:14:29.086 Number of I/O Submission Queues: 127 00:14:29.087 Number of I/O Completion Queues: 127 00:14:29.087 00:14:29.087 Active Namespaces 00:14:29.087 ================= 00:14:29.087 Namespace ID:1 00:14:29.087 Error Recovery Timeout: Unlimited 00:14:29.087 Command Set Identifier: NVM (00h) 00:14:29.087 Deallocate: Supported 00:14:29.087 Deallocated/Unwritten Error: Not Supported 00:14:29.087 Deallocated Read Value: Unknown 00:14:29.087 Deallocate in Write Zeroes: Not Supported 00:14:29.087 Deallocated Guard Field: 0xFFFF 00:14:29.087 Flush: Supported 00:14:29.087 Reservation: Supported 00:14:29.087 Namespace Sharing Capabilities: Multiple Controllers 00:14:29.087 Size (in LBAs): 131072 (0GiB) 00:14:29.087 Capacity (in LBAs): 131072 (0GiB) 00:14:29.087 Utilization (in LBAs): 131072 (0GiB) 00:14:29.087 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:29.087 EUI64: ABCDEF0123456789 00:14:29.087 UUID: 2ded56a9-a535-4bfd-a56b-1c866c8409db 00:14:29.087 Thin Provisioning: Not Supported 00:14:29.087 Per-NS Atomic Units: Yes 00:14:29.087 Atomic Boundary Size (Normal): 0 00:14:29.087 Atomic Boundary Size (PFail): 0 00:14:29.087 Atomic Boundary Offset: 0 00:14:29.087 Maximum Single Source Range Length: 65535 00:14:29.087 Maximum Copy Length: 65535 00:14:29.087 Maximum Source Range Count: 1 00:14:29.087 NGUID/EUI64 Never Reused: No 00:14:29.087 Namespace Write Protected: No 00:14:29.087 Number of LBA Formats: 1 00:14:29.087 Current LBA Format: LBA Format #00 00:14:29.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:29.087 00:14:29.087 11:10:39 -- host/identify.sh@51 -- # sync 00:14:29.087 11:10:39 -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.087 11:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.087 11:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:29.087 11:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.087 11:10:40 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:29.087 11:10:40 -- host/identify.sh@56 -- # nvmftestfini 00:14:29.087 11:10:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:29.087 11:10:40 -- nvmf/common.sh@116 -- # sync 00:14:29.087 11:10:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:29.087 11:10:40 -- nvmf/common.sh@119 -- # set +e 00:14:29.087 11:10:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:29.087 11:10:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:29.087 rmmod nvme_tcp 00:14:29.087 rmmod nvme_fabrics 00:14:29.087 rmmod nvme_keyring 00:14:29.087 11:10:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:29.087 11:10:40 -- nvmf/common.sh@123 -- # set -e 00:14:29.087 11:10:40 -- nvmf/common.sh@124 -- # return 0 00:14:29.087 11:10:40 -- nvmf/common.sh@477 -- # '[' -n 80328 ']' 00:14:29.087 11:10:40 -- nvmf/common.sh@478 -- # killprocess 80328 00:14:29.087 11:10:40 -- common/autotest_common.sh@936 -- # '[' -z 80328 ']' 00:14:29.087 11:10:40 -- common/autotest_common.sh@940 -- # kill -0 80328 00:14:29.087 11:10:40 -- common/autotest_common.sh@941 -- # uname 00:14:29.087 11:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.087 11:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80328 00:14:29.087 11:10:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.087 11:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.087 killing process with pid 80328 00:14:29.087 11:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80328' 00:14:29.087 11:10:40 -- common/autotest_common.sh@955 -- # kill 80328 00:14:29.087 [2024-12-06 11:10:40.118306] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:29.087 11:10:40 -- common/autotest_common.sh@960 -- # wait 80328 00:14:29.346 11:10:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:29.346 11:10:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:29.346 11:10:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:29.346 11:10:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.346 11:10:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:29.346 11:10:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.346 11:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.346 11:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.346 11:10:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:29.346 00:14:29.346 real 0m2.457s 00:14:29.346 user 0m6.893s 00:14:29.346 sys 0m0.587s 00:14:29.346 11:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.346 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:14:29.346 ************************************ 00:14:29.346 END TEST nvmf_identify 00:14:29.346 ************************************ 00:14:29.346 11:10:40 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:29.346 11:10:40 -- common/autotest_common.sh@1087 -- 
# '[' 3 -le 1 ']' 00:14:29.346 11:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.346 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:14:29.346 ************************************ 00:14:29.346 START TEST nvmf_perf 00:14:29.346 ************************************ 00:14:29.346 11:10:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:29.346 * Looking for test storage... 00:14:29.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:29.346 11:10:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.346 11:10:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.346 11:10:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.606 11:10:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.606 11:10:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.606 11:10:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.606 11:10:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.606 11:10:40 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.606 11:10:40 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.606 11:10:40 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.606 11:10:40 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.606 11:10:40 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.606 11:10:40 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.606 11:10:40 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.606 11:10:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.606 11:10:40 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.606 11:10:40 -- scripts/common.sh@344 -- # : 1 00:14:29.606 11:10:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.606 11:10:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.606 11:10:40 -- scripts/common.sh@364 -- # decimal 1 00:14:29.606 11:10:40 -- scripts/common.sh@352 -- # local d=1 00:14:29.606 11:10:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.606 11:10:40 -- scripts/common.sh@354 -- # echo 1 00:14:29.606 11:10:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.606 11:10:40 -- scripts/common.sh@365 -- # decimal 2 00:14:29.606 11:10:40 -- scripts/common.sh@352 -- # local d=2 00:14:29.606 11:10:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.606 11:10:40 -- scripts/common.sh@354 -- # echo 2 00:14:29.606 11:10:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.606 11:10:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.606 11:10:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.606 11:10:40 -- scripts/common.sh@367 -- # return 0 00:14:29.606 11:10:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.606 11:10:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.606 --rc genhtml_branch_coverage=1 00:14:29.606 --rc genhtml_function_coverage=1 00:14:29.606 --rc genhtml_legend=1 00:14:29.606 --rc geninfo_all_blocks=1 00:14:29.606 --rc geninfo_unexecuted_blocks=1 00:14:29.606 00:14:29.606 ' 00:14:29.606 11:10:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.606 --rc genhtml_branch_coverage=1 00:14:29.606 --rc genhtml_function_coverage=1 00:14:29.606 --rc genhtml_legend=1 00:14:29.606 --rc geninfo_all_blocks=1 00:14:29.606 --rc geninfo_unexecuted_blocks=1 00:14:29.606 00:14:29.606 ' 00:14:29.606 11:10:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.606 --rc genhtml_branch_coverage=1 00:14:29.606 --rc genhtml_function_coverage=1 00:14:29.606 --rc genhtml_legend=1 00:14:29.606 --rc geninfo_all_blocks=1 00:14:29.606 --rc geninfo_unexecuted_blocks=1 00:14:29.606 00:14:29.606 ' 00:14:29.606 11:10:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.606 --rc genhtml_branch_coverage=1 00:14:29.606 --rc genhtml_function_coverage=1 00:14:29.606 --rc genhtml_legend=1 00:14:29.606 --rc geninfo_all_blocks=1 00:14:29.606 --rc geninfo_unexecuted_blocks=1 00:14:29.606 00:14:29.606 ' 00:14:29.606 11:10:40 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.606 11:10:40 -- nvmf/common.sh@7 -- # uname -s 00:14:29.606 11:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.606 11:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.606 11:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.606 11:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.606 11:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.606 11:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.606 11:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.606 11:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.606 11:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.606 11:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.606 11:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:14:29.606 
11:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:14:29.606 11:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.606 11:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.606 11:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.606 11:10:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.606 11:10:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.606 11:10:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.606 11:10:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.606 11:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.606 11:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.607 11:10:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.607 11:10:40 -- paths/export.sh@5 -- # export PATH 00:14:29.607 11:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.607 11:10:40 -- nvmf/common.sh@46 -- # : 0 00:14:29.607 11:10:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:29.607 11:10:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:29.607 11:10:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:29.607 11:10:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.607 11:10:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.607 11:10:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:29.607 11:10:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:29.607 11:10:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:29.607 11:10:40 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:29.607 11:10:40 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:29.607 11:10:40 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.607 11:10:40 -- host/perf.sh@17 -- # nvmftestinit 00:14:29.607 11:10:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:29.607 11:10:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.607 11:10:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:29.607 11:10:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:29.607 11:10:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:29.607 11:10:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.607 11:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.607 11:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.607 11:10:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:29.607 11:10:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:29.607 11:10:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:29.607 11:10:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:29.607 11:10:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:29.607 11:10:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:29.607 11:10:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.607 11:10:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.607 11:10:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:29.607 11:10:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:29.607 11:10:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:29.607 11:10:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:29.607 11:10:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:29.607 11:10:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.607 11:10:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:29.607 11:10:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:29.607 11:10:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:29.607 11:10:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:29.607 11:10:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:29.607 11:10:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:29.607 Cannot find device "nvmf_tgt_br" 00:14:29.607 11:10:40 -- nvmf/common.sh@154 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.607 Cannot find device "nvmf_tgt_br2" 00:14:29.607 11:10:40 -- nvmf/common.sh@155 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:29.607 11:10:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:29.607 Cannot find device "nvmf_tgt_br" 00:14:29.607 11:10:40 -- nvmf/common.sh@157 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:29.607 Cannot find device "nvmf_tgt_br2" 00:14:29.607 11:10:40 -- nvmf/common.sh@158 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:29.607 11:10:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:29.607 11:10:40 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.607 11:10:40 -- nvmf/common.sh@161 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.607 11:10:40 -- nvmf/common.sh@162 -- # true 00:14:29.607 11:10:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:29.607 11:10:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:29.607 11:10:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:29.607 11:10:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:29.607 11:10:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:29.607 11:10:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:29.866 11:10:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:29.866 11:10:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:29.866 11:10:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:29.866 11:10:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:29.866 11:10:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:29.867 11:10:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:29.867 11:10:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:29.867 11:10:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:29.867 11:10:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:29.867 11:10:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:29.867 11:10:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:29.867 11:10:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:29.867 11:10:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:29.867 11:10:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:29.867 11:10:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.867 11:10:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.867 11:10:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.867 11:10:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:29.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:14:29.867 00:14:29.867 --- 10.0.0.2 ping statistics --- 00:14:29.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.867 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:29.867 11:10:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:29.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:29.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:29.867 00:14:29.867 --- 10.0.0.3 ping statistics --- 00:14:29.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.867 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:29.867 11:10:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:29.867 00:14:29.867 --- 10.0.0.1 ping statistics --- 00:14:29.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.867 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:29.867 11:10:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.867 11:10:40 -- nvmf/common.sh@421 -- # return 0 00:14:29.867 11:10:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:29.867 11:10:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.867 11:10:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:29.867 11:10:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:29.867 11:10:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.867 11:10:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:29.867 11:10:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:29.867 11:10:40 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:29.867 11:10:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:29.867 11:10:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.867 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:14:29.867 11:10:40 -- nvmf/common.sh@469 -- # nvmfpid=80543 00:14:29.867 11:10:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.867 11:10:40 -- nvmf/common.sh@470 -- # waitforlisten 80543 00:14:29.867 11:10:40 -- common/autotest_common.sh@829 -- # '[' -z 80543 ']' 00:14:29.867 11:10:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.867 11:10:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.867 11:10:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.867 11:10:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.867 11:10:40 -- common/autotest_common.sh@10 -- # set +x 00:14:29.867 [2024-12-06 11:10:40.964475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:29.867 [2024-12-06 11:10:40.964597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.126 [2024-12-06 11:10:41.103477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.126 [2024-12-06 11:10:41.143338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.126 [2024-12-06 11:10:41.143511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.126 [2024-12-06 11:10:41.143527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:30.126 [2024-12-06 11:10:41.143552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.126 [2024-12-06 11:10:41.143655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.126 [2024-12-06 11:10:41.143814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.126 [2024-12-06 11:10:41.144420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.126 [2024-12-06 11:10:41.144470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.063 11:10:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.063 11:10:41 -- common/autotest_common.sh@862 -- # return 0 00:14:31.063 11:10:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.063 11:10:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.063 11:10:41 -- common/autotest_common.sh@10 -- # set +x 00:14:31.063 11:10:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.063 11:10:42 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:31.063 11:10:42 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:31.322 11:10:42 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:31.323 11:10:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:31.581 11:10:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:31.581 11:10:42 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.839 11:10:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:31.839 11:10:42 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:31.839 11:10:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:31.839 11:10:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:31.839 11:10:42 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:32.096 [2024-12-06 11:10:43.182357] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.096 11:10:43 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.353 11:10:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:32.353 11:10:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.611 11:10:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:32.611 11:10:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:32.869 11:10:43 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.127 [2024-12-06 11:10:44.159744] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.127 11:10:44 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.385 11:10:44 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:33.385 11:10:44 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:33.386 11:10:44 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:33.386 11:10:44 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:34.761 Initializing NVMe Controllers 00:14:34.761 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:34.761 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:34.761 Initialization complete. Launching workers. 00:14:34.761 ======================================================== 00:14:34.761 Latency(us) 00:14:34.761 Device Information : IOPS MiB/s Average min max 00:14:34.761 PCIE (0000:00:06.0) NSID 1 from core 0: 22801.02 89.07 1403.64 407.85 8070.54 00:14:34.761 ======================================================== 00:14:34.761 Total : 22801.02 89.07 1403.64 407.85 8070.54 00:14:34.761 00:14:34.761 11:10:45 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:35.695 Initializing NVMe Controllers 00:14:35.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:35.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.695 Initialization complete. Launching workers. 00:14:35.695 ======================================================== 00:14:35.695 Latency(us) 00:14:35.695 Device Information : IOPS MiB/s Average min max 00:14:35.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3596.23 14.05 277.80 100.73 7246.41 00:14:35.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.63 0.49 8079.57 4904.96 14983.93 00:14:35.695 ======================================================== 00:14:35.695 Total : 3720.86 14.53 539.11 100.73 14983.93 00:14:35.695 00:14:35.695 11:10:46 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:37.069 Initializing NVMe Controllers 00:14:37.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:37.069 Initialization complete. Launching workers. 00:14:37.069 ======================================================== 00:14:37.069 Latency(us) 00:14:37.069 Device Information : IOPS MiB/s Average min max 00:14:37.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8642.47 33.76 3705.11 451.41 10477.28 00:14:37.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3978.92 15.54 8065.99 5070.51 16364.21 00:14:37.069 ======================================================== 00:14:37.069 Total : 12621.39 49.30 5079.88 451.41 16364.21 00:14:37.069 00:14:37.069 11:10:48 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:37.069 11:10:48 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:39.620 Initializing NVMe Controllers 00:14:39.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.620 Controller IO queue size 128, less than required. 
00:14:39.620 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.620 Controller IO queue size 128, less than required. 00:14:39.620 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:39.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:39.620 Initialization complete. Launching workers. 00:14:39.620 ======================================================== 00:14:39.620 Latency(us) 00:14:39.620 Device Information : IOPS MiB/s Average min max 00:14:39.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1974.81 493.70 65454.14 35298.10 102577.71 00:14:39.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 668.44 167.11 201711.91 103210.06 338941.93 00:14:39.621 ======================================================== 00:14:39.621 Total : 2643.25 660.81 99911.61 35298.10 338941.93 00:14:39.621 00:14:39.621 11:10:50 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:39.879 No valid NVMe controllers or AIO or URING devices found 00:14:39.879 Initializing NVMe Controllers 00:14:39.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.879 Controller IO queue size 128, less than required. 00:14:39.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.879 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:39.879 Controller IO queue size 128, less than required. 00:14:39.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.879 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:39.879 WARNING: Some requested NVMe devices were skipped 00:14:39.879 11:10:50 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:42.406 Initializing NVMe Controllers 00:14:42.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.406 Controller IO queue size 128, less than required. 00:14:42.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.406 Controller IO queue size 128, less than required. 00:14:42.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:42.406 Initialization complete. Launching workers. 
00:14:42.406 00:14:42.406 ==================== 00:14:42.406 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:42.406 TCP transport: 00:14:42.406 polls: 7367 00:14:42.406 idle_polls: 0 00:14:42.406 sock_completions: 7367 00:14:42.406 nvme_completions: 6623 00:14:42.406 submitted_requests: 10061 00:14:42.406 queued_requests: 1 00:14:42.406 00:14:42.406 ==================== 00:14:42.406 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:42.406 TCP transport: 00:14:42.406 polls: 8175 00:14:42.406 idle_polls: 0 00:14:42.406 sock_completions: 8175 00:14:42.406 nvme_completions: 6635 00:14:42.406 submitted_requests: 10232 00:14:42.406 queued_requests: 1 00:14:42.406 ======================================================== 00:14:42.406 Latency(us) 00:14:42.406 Device Information : IOPS MiB/s Average min max 00:14:42.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.39 429.60 75339.85 39834.08 135454.69 00:14:42.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1720.89 430.22 75626.54 37385.24 123651.95 00:14:42.406 ======================================================== 00:14:42.406 Total : 3439.27 859.82 75483.30 37385.24 135454.69 00:14:42.406 00:14:42.406 11:10:53 -- host/perf.sh@66 -- # sync 00:14:42.406 11:10:53 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.664 11:10:53 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:42.664 11:10:53 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:42.664 11:10:53 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:43.229 11:10:54 -- host/perf.sh@72 -- # ls_guid=63bfed16-639d-456d-82ed-4f688cd2a42d 00:14:43.229 11:10:54 -- host/perf.sh@73 -- # get_lvs_free_mb 63bfed16-639d-456d-82ed-4f688cd2a42d 00:14:43.229 11:10:54 -- common/autotest_common.sh@1353 -- # local lvs_uuid=63bfed16-639d-456d-82ed-4f688cd2a42d 00:14:43.229 11:10:54 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:43.229 11:10:54 -- common/autotest_common.sh@1355 -- # local fc 00:14:43.229 11:10:54 -- common/autotest_common.sh@1356 -- # local cs 00:14:43.229 11:10:54 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:43.229 11:10:54 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:43.229 { 00:14:43.229 "uuid": "63bfed16-639d-456d-82ed-4f688cd2a42d", 00:14:43.229 "name": "lvs_0", 00:14:43.229 "base_bdev": "Nvme0n1", 00:14:43.229 "total_data_clusters": 1278, 00:14:43.229 "free_clusters": 1278, 00:14:43.229 "block_size": 4096, 00:14:43.229 "cluster_size": 4194304 00:14:43.229 } 00:14:43.229 ]' 00:14:43.229 11:10:54 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="63bfed16-639d-456d-82ed-4f688cd2a42d") .free_clusters' 00:14:43.488 11:10:54 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:43.488 11:10:54 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="63bfed16-639d-456d-82ed-4f688cd2a42d") .cluster_size' 00:14:43.488 5112 00:14:43.488 11:10:54 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:43.488 11:10:54 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:43.488 11:10:54 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:43.488 11:10:54 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:43.488 11:10:54 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u 63bfed16-639d-456d-82ed-4f688cd2a42d lbd_0 5112 00:14:43.745 11:10:54 -- host/perf.sh@80 -- # lb_guid=f1ffa454-450d-4298-867d-8cdccb19d651 00:14:43.745 11:10:54 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f1ffa454-450d-4298-867d-8cdccb19d651 lvs_n_0 00:14:44.003 11:10:55 -- host/perf.sh@83 -- # ls_nested_guid=48641bb6-5003-4af3-9197-bc8c064212ab 00:14:44.003 11:10:55 -- host/perf.sh@84 -- # get_lvs_free_mb 48641bb6-5003-4af3-9197-bc8c064212ab 00:14:44.003 11:10:55 -- common/autotest_common.sh@1353 -- # local lvs_uuid=48641bb6-5003-4af3-9197-bc8c064212ab 00:14:44.003 11:10:55 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:44.003 11:10:55 -- common/autotest_common.sh@1355 -- # local fc 00:14:44.003 11:10:55 -- common/autotest_common.sh@1356 -- # local cs 00:14:44.003 11:10:55 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:44.260 11:10:55 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:44.260 { 00:14:44.260 "uuid": "63bfed16-639d-456d-82ed-4f688cd2a42d", 00:14:44.260 "name": "lvs_0", 00:14:44.260 "base_bdev": "Nvme0n1", 00:14:44.260 "total_data_clusters": 1278, 00:14:44.260 "free_clusters": 0, 00:14:44.260 "block_size": 4096, 00:14:44.260 "cluster_size": 4194304 00:14:44.260 }, 00:14:44.260 { 00:14:44.260 "uuid": "48641bb6-5003-4af3-9197-bc8c064212ab", 00:14:44.260 "name": "lvs_n_0", 00:14:44.260 "base_bdev": "f1ffa454-450d-4298-867d-8cdccb19d651", 00:14:44.260 "total_data_clusters": 1276, 00:14:44.260 "free_clusters": 1276, 00:14:44.260 "block_size": 4096, 00:14:44.260 "cluster_size": 4194304 00:14:44.260 } 00:14:44.260 ]' 00:14:44.260 11:10:55 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="48641bb6-5003-4af3-9197-bc8c064212ab") .free_clusters' 00:14:44.260 11:10:55 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:44.260 11:10:55 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="48641bb6-5003-4af3-9197-bc8c064212ab") .cluster_size' 00:14:44.260 11:10:55 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:44.526 5104 00:14:44.526 11:10:55 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:44.526 11:10:55 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:44.526 11:10:55 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:44.526 11:10:55 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 48641bb6-5003-4af3-9197-bc8c064212ab lbd_nest_0 5104 00:14:44.526 11:10:55 -- host/perf.sh@88 -- # lb_nested_guid=ba74e5cf-a08a-4651-8f0f-bfcda03c5ca5 00:14:44.526 11:10:55 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.784 11:10:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:44.784 11:10:55 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ba74e5cf-a08a-4651-8f0f-bfcda03c5ca5 00:14:45.043 11:10:56 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.302 11:10:56 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:45.302 11:10:56 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:45.302 11:10:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:45.302 11:10:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.302 11:10:56 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.560 No valid NVMe controllers or AIO or URING devices found 00:14:45.819 Initializing NVMe Controllers 00:14:45.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.819 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:45.819 WARNING: Some requested NVMe devices were skipped 00:14:45.819 11:10:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.819 11:10:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:55.821 Initializing NVMe Controllers 00:14:55.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:55.821 Initialization complete. Launching workers. 00:14:55.821 ======================================================== 00:14:55.821 Latency(us) 00:14:55.821 Device Information : IOPS MiB/s Average min max 00:14:55.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 962.80 120.35 1037.84 323.45 8499.37 00:14:55.821 ======================================================== 00:14:55.821 Total : 962.80 120.35 1037.84 323.45 8499.37 00:14:55.821 00:14:55.821 11:11:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:55.821 11:11:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:55.821 11:11:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:56.386 No valid NVMe controllers or AIO or URING devices found 00:14:56.386 Initializing NVMe Controllers 00:14:56.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.386 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:56.386 WARNING: Some requested NVMe devices were skipped 00:14:56.386 11:11:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:56.386 11:11:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:08.587 Initializing NVMe Controllers 00:15:08.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.587 Initialization complete. Launching workers. 
00:15:08.587 ======================================================== 00:15:08.587 Latency(us) 00:15:08.587 Device Information : IOPS MiB/s Average min max 00:15:08.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1358.90 169.86 23584.22 5975.69 56209.04 00:15:08.587 ======================================================== 00:15:08.587 Total : 1358.90 169.86 23584.22 5975.69 56209.04 00:15:08.587 00:15:08.587 11:11:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:08.587 11:11:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:08.587 11:11:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:08.587 No valid NVMe controllers or AIO or URING devices found 00:15:08.587 Initializing NVMe Controllers 00:15:08.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.587 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:08.587 WARNING: Some requested NVMe devices were skipped 00:15:08.587 11:11:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:08.587 11:11:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:18.568 Initializing NVMe Controllers 00:15:18.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.568 Controller IO queue size 128, less than required. 00:15:18.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:18.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:18.568 Initialization complete. Launching workers. 
00:15:18.568 ======================================================== 00:15:18.568 Latency(us) 00:15:18.568 Device Information : IOPS MiB/s Average min max 00:15:18.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4024.67 503.08 31866.79 13686.27 64120.18 00:15:18.568 ======================================================== 00:15:18.568 Total : 4024.67 503.08 31866.79 13686.27 64120.18 00:15:18.568 00:15:18.568 11:11:28 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.568 11:11:28 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ba74e5cf-a08a-4651-8f0f-bfcda03c5ca5 00:15:18.568 11:11:28 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:18.568 11:11:29 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f1ffa454-450d-4298-867d-8cdccb19d651 00:15:18.568 11:11:29 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:18.827 11:11:29 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:18.827 11:11:29 -- host/perf.sh@114 -- # nvmftestfini 00:15:18.827 11:11:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.827 11:11:29 -- nvmf/common.sh@116 -- # sync 00:15:18.827 11:11:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:18.827 11:11:29 -- nvmf/common.sh@119 -- # set +e 00:15:18.827 11:11:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.827 11:11:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:18.827 rmmod nvme_tcp 00:15:18.827 rmmod nvme_fabrics 00:15:18.827 rmmod nvme_keyring 00:15:18.827 11:11:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.827 11:11:29 -- nvmf/common.sh@123 -- # set -e 00:15:18.827 11:11:29 -- nvmf/common.sh@124 -- # return 0 00:15:18.827 11:11:29 -- nvmf/common.sh@477 -- # '[' -n 80543 ']' 00:15:18.827 11:11:29 -- nvmf/common.sh@478 -- # killprocess 80543 00:15:18.827 11:11:29 -- common/autotest_common.sh@936 -- # '[' -z 80543 ']' 00:15:18.827 11:11:29 -- common/autotest_common.sh@940 -- # kill -0 80543 00:15:18.827 11:11:29 -- common/autotest_common.sh@941 -- # uname 00:15:18.827 11:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.827 11:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80543 00:15:18.827 killing process with pid 80543 00:15:18.827 11:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.827 11:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.827 11:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80543' 00:15:18.827 11:11:29 -- common/autotest_common.sh@955 -- # kill 80543 00:15:18.827 11:11:29 -- common/autotest_common.sh@960 -- # wait 80543 00:15:20.204 11:11:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.204 11:11:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.204 11:11:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.204 11:11:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.204 11:11:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.204 11:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.204 11:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.204 11:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.204 11:11:31 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:20.204 ************************************ 00:15:20.204 END TEST nvmf_perf 00:15:20.204 ************************************ 00:15:20.204 00:15:20.204 real 0m50.674s 00:15:20.204 user 3m11.131s 00:15:20.204 sys 0m12.854s 00:15:20.204 11:11:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:20.204 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:15:20.204 11:11:31 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.204 11:11:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.204 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:15:20.204 ************************************ 00:15:20.204 START TEST nvmf_fio_host 00:15:20.204 ************************************ 00:15:20.204 11:11:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.204 * Looking for test storage... 00:15:20.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:20.204 11:11:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:20.204 11:11:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:20.204 11:11:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.204 11:11:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.204 11:11:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.204 11:11:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.204 11:11:31 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.204 11:11:31 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.204 11:11:31 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.204 11:11:31 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.204 11:11:31 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.204 11:11:31 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.204 11:11:31 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.204 11:11:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.204 11:11:31 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.204 11:11:31 -- scripts/common.sh@344 -- # : 1 00:15:20.204 11:11:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.204 11:11:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.204 11:11:31 -- scripts/common.sh@364 -- # decimal 1 00:15:20.204 11:11:31 -- scripts/common.sh@352 -- # local d=1 00:15:20.204 11:11:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.204 11:11:31 -- scripts/common.sh@354 -- # echo 1 00:15:20.204 11:11:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.204 11:11:31 -- scripts/common.sh@365 -- # decimal 2 00:15:20.204 11:11:31 -- scripts/common.sh@352 -- # local d=2 00:15:20.204 11:11:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.204 11:11:31 -- scripts/common.sh@354 -- # echo 2 00:15:20.204 11:11:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.204 11:11:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.204 11:11:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.204 11:11:31 -- scripts/common.sh@367 -- # return 0 00:15:20.204 11:11:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.204 --rc genhtml_branch_coverage=1 00:15:20.204 --rc genhtml_function_coverage=1 00:15:20.204 --rc genhtml_legend=1 00:15:20.204 --rc geninfo_all_blocks=1 00:15:20.204 --rc geninfo_unexecuted_blocks=1 00:15:20.204 00:15:20.204 ' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.204 --rc genhtml_branch_coverage=1 00:15:20.204 --rc genhtml_function_coverage=1 00:15:20.204 --rc genhtml_legend=1 00:15:20.204 --rc geninfo_all_blocks=1 00:15:20.204 --rc geninfo_unexecuted_blocks=1 00:15:20.204 00:15:20.204 ' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.204 --rc genhtml_branch_coverage=1 00:15:20.204 --rc genhtml_function_coverage=1 00:15:20.204 --rc genhtml_legend=1 00:15:20.204 --rc geninfo_all_blocks=1 00:15:20.204 --rc geninfo_unexecuted_blocks=1 00:15:20.204 00:15:20.204 ' 00:15:20.204 11:11:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.204 --rc genhtml_branch_coverage=1 00:15:20.204 --rc genhtml_function_coverage=1 00:15:20.204 --rc genhtml_legend=1 00:15:20.204 --rc geninfo_all_blocks=1 00:15:20.204 --rc geninfo_unexecuted_blocks=1 00:15:20.204 00:15:20.204 ' 00:15:20.204 11:11:31 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.204 11:11:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.204 11:11:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.204 11:11:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.204 11:11:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.204 11:11:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.204 11:11:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- paths/export.sh@5 -- # export PATH 00:15:20.205 11:11:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.205 11:11:31 -- nvmf/common.sh@7 -- # uname -s 00:15:20.205 11:11:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.205 11:11:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.205 11:11:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.205 11:11:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.205 11:11:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.205 11:11:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.205 11:11:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.205 11:11:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.205 11:11:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.205 11:11:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:15:20.205 11:11:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:15:20.205 11:11:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.205 11:11:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.205 11:11:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.205 11:11:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.205 11:11:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.205 11:11:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.205 11:11:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.205 11:11:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- paths/export.sh@5 -- # export PATH 00:15:20.205 11:11:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.205 11:11:31 -- nvmf/common.sh@46 -- # : 0 00:15:20.205 11:11:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.205 11:11:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.205 11:11:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.205 11:11:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.205 11:11:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.205 11:11:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:20.205 11:11:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.205 11:11:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.205 11:11:31 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.205 11:11:31 -- host/fio.sh@14 -- # nvmftestinit 00:15:20.205 11:11:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.205 11:11:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.205 11:11:31 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:15:20.205 11:11:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.205 11:11:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.205 11:11:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.205 11:11:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.205 11:11:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.205 11:11:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.205 11:11:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.205 11:11:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.205 11:11:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.205 11:11:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.205 11:11:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.205 11:11:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.205 11:11:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.205 11:11:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.205 11:11:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.205 11:11:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.205 11:11:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.205 11:11:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.205 11:11:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.205 11:11:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.205 11:11:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.205 Cannot find device "nvmf_tgt_br" 00:15:20.205 11:11:31 -- nvmf/common.sh@154 -- # true 00:15:20.205 11:11:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.205 Cannot find device "nvmf_tgt_br2" 00:15:20.205 11:11:31 -- nvmf/common.sh@155 -- # true 00:15:20.205 11:11:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.205 11:11:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.464 Cannot find device "nvmf_tgt_br" 00:15:20.464 11:11:31 -- nvmf/common.sh@157 -- # true 00:15:20.464 11:11:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.464 Cannot find device "nvmf_tgt_br2" 00:15:20.464 11:11:31 -- nvmf/common.sh@158 -- # true 00:15:20.464 11:11:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.464 11:11:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.464 11:11:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.464 11:11:31 -- nvmf/common.sh@161 -- # true 00:15:20.464 11:11:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.464 11:11:31 -- nvmf/common.sh@162 -- # true 00:15:20.464 11:11:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.464 11:11:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.464 11:11:31 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.464 11:11:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.464 11:11:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.464 11:11:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.464 11:11:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.464 11:11:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.464 11:11:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.464 11:11:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.464 11:11:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.464 11:11:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.464 11:11:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.464 11:11:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.464 11:11:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.464 11:11:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.464 11:11:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.464 11:11:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.465 11:11:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.465 11:11:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.465 11:11:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.465 11:11:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.465 11:11:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.465 11:11:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:20.465 00:15:20.465 --- 10.0.0.2 ping statistics --- 00:15:20.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.465 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:20.465 11:11:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:20.465 00:15:20.465 --- 10.0.0.3 ping statistics --- 00:15:20.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.465 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:20.465 11:11:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:20.465 00:15:20.465 --- 10.0.0.1 ping statistics --- 00:15:20.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.465 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:20.465 11:11:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.465 11:11:31 -- nvmf/common.sh@421 -- # return 0 00:15:20.465 11:11:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.465 11:11:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.465 11:11:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.465 11:11:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.465 11:11:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.465 11:11:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.465 11:11:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.724 11:11:31 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:20.724 11:11:31 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:20.724 11:11:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.724 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:15:20.724 11:11:31 -- host/fio.sh@24 -- # nvmfpid=81369 00:15:20.724 11:11:31 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.724 11:11:31 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.724 11:11:31 -- host/fio.sh@28 -- # waitforlisten 81369 00:15:20.724 11:11:31 -- common/autotest_common.sh@829 -- # '[' -z 81369 ']' 00:15:20.724 11:11:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.724 11:11:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.724 11:11:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.724 11:11:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.724 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:15:20.724 [2024-12-06 11:11:31.673019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.724 [2024-12-06 11:11:31.673111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.724 [2024-12-06 11:11:31.816627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.724 [2024-12-06 11:11:31.857028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.724 [2024-12-06 11:11:31.857433] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.724 [2024-12-06 11:11:31.857615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.724 [2024-12-06 11:11:31.857785] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
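(Condensed sketch of the topology that nvmf_veth_init just created, with device names, addresses, and firewall rules copied from the trace above; link-up and cleanup steps are omitted. The initiator side stays in the root namespace on 10.0.0.1, while both target interfaces live inside nvmf_tgt_ns_spdk and are reachable through the nvmf_br bridge.)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,    10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br    # the three host-side veth ends join the bridge
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT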
00:15:20.724 [2024-12-06 11:11:31.858053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.724 [2024-12-06 11:11:31.858162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.724 [2024-12-06 11:11:31.858234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.724 [2024-12-06 11:11:31.858233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.658 11:11:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.658 11:11:32 -- common/autotest_common.sh@862 -- # return 0 00:15:21.658 11:11:32 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:21.916 [2024-12-06 11:11:32.928282] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.916 11:11:32 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:21.916 11:11:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.916 11:11:32 -- common/autotest_common.sh@10 -- # set +x 00:15:21.916 11:11:32 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.175 Malloc1 00:15:22.175 11:11:33 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.433 11:11:33 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.691 11:11:33 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.950 [2024-12-06 11:11:33.973053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.950 11:11:33 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.209 11:11:34 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:23.209 11:11:34 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.209 11:11:34 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.209 11:11:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:23.209 11:11:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.209 11:11:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:23.209 11:11:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.209 11:11:34 -- common/autotest_common.sh@1330 -- # shift 00:15:23.209 11:11:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:23.209 11:11:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:23.209 11:11:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:23.209 11:11:34 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:23.209 11:11:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:23.209 11:11:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:23.209 11:11:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:23.209 11:11:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.466 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:23.466 fio-3.35 00:15:23.466 Starting 1 thread 00:15:25.997 00:15:25.997 test: (groupid=0, jobs=1): err= 0: pid=81452: Fri Dec 6 11:11:36 2024 00:15:25.997 read: IOPS=9495, BW=37.1MiB/s (38.9MB/s)(74.4MiB/2006msec) 00:15:25.997 slat (nsec): min=1836, max=407232, avg=2462.44, stdev=3917.24 00:15:25.997 clat (usec): min=2671, max=12508, avg=7017.68, stdev=575.31 00:15:25.997 lat (usec): min=2716, max=12510, avg=7020.14, stdev=575.17 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:25.997 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:25.997 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:15:25.997 | 99.00th=[ 8356], 99.50th=[ 8717], 99.90th=[10945], 99.95th=[11731], 00:15:25.997 | 99.99th=[12518] 00:15:25.997 bw ( KiB/s): min=37328, max=38392, per=99.97%, avg=37972.00, stdev=500.47, samples=4 00:15:25.997 iops : min= 9332, max= 9598, avg=9493.00, stdev=125.12, samples=4 00:15:25.997 write: IOPS=9504, BW=37.1MiB/s (38.9MB/s)(74.5MiB/2006msec); 0 zone resets 00:15:25.997 slat (nsec): min=1908, max=240198, avg=2554.09, stdev=2396.89 00:15:25.997 clat (usec): min=2538, max=12238, avg=6425.86, stdev=544.06 00:15:25.997 lat (usec): min=2550, max=12240, avg=6428.41, stdev=544.09 00:15:25.997 clat percentiles (usec): 00:15:25.997 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 5997], 00:15:25.997 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:15:25.997 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:15:25.997 | 99.00th=[ 7767], 99.50th=[ 8225], 99.90th=[10683], 99.95th=[11469], 00:15:25.997 | 99.99th=[12256] 00:15:25.997 bw ( KiB/s): min=37752, max=38312, per=99.96%, avg=38002.00, stdev=231.25, samples=4 00:15:25.997 iops : min= 9438, max= 9578, avg=9500.50, stdev=57.81, samples=4 00:15:25.997 lat (msec) : 4=0.19%, 10=99.65%, 20=0.16% 00:15:25.997 cpu : usr=71.52%, sys=20.20%, ctx=61, majf=0, minf=5 00:15:25.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:25.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.997 issued rwts: total=19048,19066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.998 00:15:25.998 Run status group 0 (all jobs): 00:15:25.998 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.4MiB (78.0MB), 
run=2006-2006msec 00:15:25.998 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.5MiB (78.1MB), run=2006-2006msec 00:15:25.998 11:11:36 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:25.998 11:11:36 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:25.998 11:11:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:25.998 11:11:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:25.998 11:11:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:25.998 11:11:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:25.998 11:11:36 -- common/autotest_common.sh@1330 -- # shift 00:15:25.998 11:11:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:25.998 11:11:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:25.998 11:11:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:25.998 11:11:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:25.998 11:11:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:25.998 11:11:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:25.998 11:11:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:25.998 11:11:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:25.998 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:25.998 fio-3.35 00:15:25.998 Starting 1 thread 00:15:28.528 00:15:28.528 test: (groupid=0, jobs=1): err= 0: pid=81501: Fri Dec 6 11:11:39 2024 00:15:28.528 read: IOPS=8742, BW=137MiB/s (143MB/s)(274MiB/2006msec) 00:15:28.528 slat (usec): min=2, max=139, avg= 3.73, stdev= 2.57 00:15:28.528 clat (usec): min=201, max=18582, avg=8132.56, stdev=2584.35 00:15:28.528 lat (usec): min=213, max=18586, avg=8136.29, stdev=2584.48 00:15:28.528 clat percentiles (usec): 00:15:28.528 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5866], 00:15:28.528 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:15:28.528 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[11600], 95.00th=[13173], 00:15:28.528 | 99.00th=[15008], 99.50th=[15401], 99.90th=[17695], 99.95th=[18220], 00:15:28.528 | 99.99th=[18482] 00:15:28.528 bw ( KiB/s): min=61760, max=79776, per=49.23%, avg=68856.00, stdev=7689.21, samples=4 00:15:28.528 iops : 
min= 3860, max= 4986, avg=4303.50, stdev=480.58, samples=4 00:15:28.528 write: IOPS=4900, BW=76.6MiB/s (80.3MB/s)(140MiB/1822msec); 0 zone resets 00:15:28.528 slat (usec): min=32, max=329, avg=38.41, stdev= 8.94 00:15:28.528 clat (usec): min=1876, max=18917, avg=11553.44, stdev=1847.19 00:15:28.528 lat (usec): min=1913, max=18951, avg=11591.85, stdev=1847.26 00:15:28.528 clat percentiles (usec): 00:15:28.528 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:15:28.528 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:15:28.528 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:15:28.528 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17695], 99.95th=[17957], 00:15:28.528 | 99.99th=[19006] 00:15:28.528 bw ( KiB/s): min=64384, max=82592, per=91.10%, avg=71432.00, stdev=7806.84, samples=4 00:15:28.528 iops : min= 4024, max= 5162, avg=4464.50, stdev=487.93, samples=4 00:15:28.528 lat (usec) : 250=0.01% 00:15:28.528 lat (msec) : 2=0.03%, 4=0.85%, 10=56.79%, 20=42.33% 00:15:28.528 cpu : usr=80.85%, sys=14.21%, ctx=46, majf=0, minf=1 00:15:28.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:28.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:28.528 issued rwts: total=17537,8929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:28.528 00:15:28.528 Run status group 0 (all jobs): 00:15:28.528 READ: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=274MiB (287MB), run=2006-2006msec 00:15:28.528 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=140MiB (146MB), run=1822-1822msec 00:15:28.528 11:11:39 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.528 11:11:39 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:28.528 11:11:39 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:28.528 11:11:39 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:28.528 11:11:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:28.528 11:11:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:28.528 11:11:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:28.528 11:11:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:28.528 11:11:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:28.528 11:11:39 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:28.528 11:11:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:28.528 11:11:39 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:28.786 Nvme0n1 00:15:28.786 11:11:39 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:29.045 11:11:40 -- host/fio.sh@53 -- # ls_guid=d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb 00:15:29.045 11:11:40 -- host/fio.sh@54 -- # get_lvs_free_mb d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb 00:15:29.045 11:11:40 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb 00:15:29.045 11:11:40 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:29.045 11:11:40 -- common/autotest_common.sh@1355 -- # 
local fc 00:15:29.045 11:11:40 -- common/autotest_common.sh@1356 -- # local cs 00:15:29.045 11:11:40 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:29.330 11:11:40 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:29.330 { 00:15:29.330 "uuid": "d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb", 00:15:29.330 "name": "lvs_0", 00:15:29.330 "base_bdev": "Nvme0n1", 00:15:29.330 "total_data_clusters": 4, 00:15:29.330 "free_clusters": 4, 00:15:29.330 "block_size": 4096, 00:15:29.330 "cluster_size": 1073741824 00:15:29.330 } 00:15:29.330 ]' 00:15:29.330 11:11:40 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb") .free_clusters' 00:15:29.330 11:11:40 -- common/autotest_common.sh@1358 -- # fc=4 00:15:29.330 11:11:40 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb") .cluster_size' 00:15:29.330 4096 00:15:29.330 11:11:40 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:29.330 11:11:40 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:29.330 11:11:40 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:29.330 11:11:40 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:29.614 b218c484-9370-43f7-b40a-53f9b789c0c7 00:15:29.614 11:11:40 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:29.872 11:11:40 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:30.129 11:11:41 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:30.388 11:11:41 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.388 11:11:41 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.388 11:11:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:30.388 11:11:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.388 11:11:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:30.388 11:11:41 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.388 11:11:41 -- common/autotest_common.sh@1330 -- # shift 00:15:30.388 11:11:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:30.388 11:11:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:30.388 11:11:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:30.388 11:11:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:30.388 11:11:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:30.388 11:11:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:30.388 11:11:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:30.388 11:11:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.388 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:30.388 fio-3.35 00:15:30.388 Starting 1 thread 00:15:32.917 00:15:32.917 test: (groupid=0, jobs=1): err= 0: pid=81605: Fri Dec 6 11:11:43 2024 00:15:32.917 read: IOPS=6377, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2007msec) 00:15:32.917 slat (usec): min=2, max=368, avg= 2.82, stdev= 4.23 00:15:32.917 clat (usec): min=3047, max=17558, avg=10469.06, stdev=882.70 00:15:32.917 lat (usec): min=3057, max=17561, avg=10471.88, stdev=882.34 00:15:32.917 clat percentiles (usec): 00:15:32.917 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:15:32.917 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:15:32.917 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:15:32.917 | 99.00th=[12518], 99.50th=[12780], 99.90th=[16450], 99.95th=[16712], 00:15:32.917 | 99.99th=[17433] 00:15:32.917 bw ( KiB/s): min=24360, max=25968, per=99.84%, avg=25468.00, stdev=758.79, samples=4 00:15:32.917 iops : min= 6090, max= 6492, avg=6367.00, stdev=189.70, samples=4 00:15:32.917 write: IOPS=6377, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2007msec); 0 zone resets 00:15:32.917 slat (usec): min=2, max=244, avg= 2.96, stdev= 2.76 00:15:32.917 clat (usec): min=2493, max=17257, avg=9508.99, stdev=822.49 00:15:32.917 lat (usec): min=2506, max=17260, avg=9511.94, stdev=822.38 00:15:32.917 clat percentiles (usec): 00:15:32.917 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:15:32.917 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:15:32.917 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10814], 00:15:32.917 | 99.00th=[11338], 99.50th=[11731], 99.90th=[14484], 99.95th=[15664], 00:15:32.917 | 99.99th=[16581] 00:15:32.917 bw ( KiB/s): min=25344, max=25648, per=99.86%, avg=25474.00, stdev=130.05, samples=4 00:15:32.917 iops : min= 6336, max= 6412, avg=6368.50, stdev=32.51, samples=4 00:15:32.917 lat (msec) : 4=0.06%, 10=51.15%, 20=48.79% 00:15:32.917 cpu : usr=72.18%, sys=21.29%, ctx=25, majf=0, minf=14 00:15:32.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:32.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.917 issued rwts: total=12799,12799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.917 00:15:32.917 Run status group 0 (all jobs): 00:15:32.917 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2007-2007msec 00:15:32.917 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2007-2007msec 00:15:32.917 11:11:43 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:33.175 11:11:44 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:33.432 11:11:44 -- host/fio.sh@64 -- # ls_nested_guid=20548a3e-b6c6-4393-a911-b05b938e2602 00:15:33.432 11:11:44 -- host/fio.sh@65 -- # get_lvs_free_mb 20548a3e-b6c6-4393-a911-b05b938e2602 00:15:33.432 11:11:44 -- common/autotest_common.sh@1353 -- # local lvs_uuid=20548a3e-b6c6-4393-a911-b05b938e2602 00:15:33.432 11:11:44 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:33.432 11:11:44 -- common/autotest_common.sh@1355 -- # local fc 00:15:33.432 11:11:44 -- common/autotest_common.sh@1356 -- # local cs 00:15:33.432 11:11:44 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:33.691 11:11:44 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:33.691 { 00:15:33.691 "uuid": "d2ad65cf-7c89-4bf4-8fc8-40fb14f070bb", 00:15:33.691 "name": "lvs_0", 00:15:33.691 "base_bdev": "Nvme0n1", 00:15:33.691 "total_data_clusters": 4, 00:15:33.691 "free_clusters": 0, 00:15:33.691 "block_size": 4096, 00:15:33.691 "cluster_size": 1073741824 00:15:33.691 }, 00:15:33.691 { 00:15:33.691 "uuid": "20548a3e-b6c6-4393-a911-b05b938e2602", 00:15:33.691 "name": "lvs_n_0", 00:15:33.691 "base_bdev": "b218c484-9370-43f7-b40a-53f9b789c0c7", 00:15:33.691 "total_data_clusters": 1022, 00:15:33.691 "free_clusters": 1022, 00:15:33.691 "block_size": 4096, 00:15:33.691 "cluster_size": 4194304 00:15:33.691 } 00:15:33.691 ]' 00:15:33.691 11:11:44 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="20548a3e-b6c6-4393-a911-b05b938e2602") .free_clusters' 00:15:33.691 11:11:44 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:33.691 11:11:44 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="20548a3e-b6c6-4393-a911-b05b938e2602") .cluster_size' 00:15:33.691 4088 00:15:33.691 11:11:44 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:33.691 11:11:44 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:33.691 11:11:44 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:33.691 11:11:44 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:33.950 efdce71e-eff1-465d-b107-3af1f3386a60 00:15:33.950 11:11:44 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:34.213 11:11:45 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:34.471 11:11:45 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:34.729 11:11:45 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.729 11:11:45 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.729 11:11:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:34.729 11:11:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.729 
11:11:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:34.729 11:11:45 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.729 11:11:45 -- common/autotest_common.sh@1330 -- # shift 00:15:34.729 11:11:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:34.729 11:11:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.729 11:11:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.729 11:11:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:34.729 11:11:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:34.987 11:11:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:34.987 11:11:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:34.987 11:11:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.987 11:11:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:34.987 11:11:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.987 11:11:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:34.987 11:11:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:34.987 11:11:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:34.987 11:11:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:34.987 11:11:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.987 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:34.987 fio-3.35 00:15:34.987 Starting 1 thread 00:15:37.538 00:15:37.538 test: (groupid=0, jobs=1): err= 0: pid=81689: Fri Dec 6 11:11:48 2024 00:15:37.538 read: IOPS=5787, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec) 00:15:37.538 slat (usec): min=2, max=333, avg= 2.76, stdev= 4.04 00:15:37.538 clat (usec): min=3258, max=21088, avg=11566.38, stdev=980.46 00:15:37.538 lat (usec): min=3267, max=21090, avg=11569.14, stdev=980.14 00:15:37.538 clat percentiles (usec): 00:15:37.538 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:15:37.538 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:15:37.538 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:15:37.538 | 99.00th=[13698], 99.50th=[14091], 99.90th=[18482], 99.95th=[20317], 00:15:37.538 | 99.99th=[21103] 00:15:37.538 bw ( KiB/s): min=22168, max=23528, per=99.82%, avg=23110.00, stdev=632.37, samples=4 00:15:37.538 iops : min= 5542, max= 5882, avg=5777.50, stdev=158.09, samples=4 00:15:37.538 write: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2009msec); 0 zone resets 00:15:37.538 slat (usec): min=2, max=296, avg= 2.90, stdev= 3.32 00:15:37.538 clat (usec): min=2518, max=20031, avg=10475.46, stdev=908.64 00:15:37.538 lat (usec): min=2531, max=20033, avg=10478.36, stdev=908.47 00:15:37.538 clat percentiles (usec): 00:15:37.538 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:15:37.538 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:15:37.538 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:15:37.538 | 99.00th=[12518], 99.50th=[12780], 99.90th=[16581], 99.95th=[18220], 00:15:37.538 | 99.99th=[20055] 00:15:37.538 bw ( KiB/s): 
min=22912, max=23168, per=99.98%, avg=23074.00, stdev=111.69, samples=4 00:15:37.538 iops : min= 5728, max= 5792, avg=5768.50, stdev=27.92, samples=4 00:15:37.538 lat (msec) : 4=0.05%, 10=15.87%, 20=84.03%, 50=0.04% 00:15:37.538 cpu : usr=72.61%, sys=21.71%, ctx=7, majf=0, minf=14 00:15:37.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:37.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.538 issued rwts: total=11628,11591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.538 00:15:37.538 Run status group 0 (all jobs): 00:15:37.538 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:15:37.538 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:15:37.538 11:11:48 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:37.538 11:11:48 -- host/fio.sh@74 -- # sync 00:15:37.538 11:11:48 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:37.797 11:11:48 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:38.056 11:11:49 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:38.323 11:11:49 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:38.582 11:11:49 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:39.515 11:11:50 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:39.515 11:11:50 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:39.515 11:11:50 -- host/fio.sh@86 -- # nvmftestfini 00:15:39.515 11:11:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.515 11:11:50 -- nvmf/common.sh@116 -- # sync 00:15:39.515 11:11:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:39.515 11:11:50 -- nvmf/common.sh@119 -- # set +e 00:15:39.515 11:11:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.515 11:11:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:39.515 rmmod nvme_tcp 00:15:39.515 rmmod nvme_fabrics 00:15:39.515 rmmod nvme_keyring 00:15:39.515 11:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.515 11:11:50 -- nvmf/common.sh@123 -- # set -e 00:15:39.515 11:11:50 -- nvmf/common.sh@124 -- # return 0 00:15:39.515 11:11:50 -- nvmf/common.sh@477 -- # '[' -n 81369 ']' 00:15:39.515 11:11:50 -- nvmf/common.sh@478 -- # killprocess 81369 00:15:39.515 11:11:50 -- common/autotest_common.sh@936 -- # '[' -z 81369 ']' 00:15:39.515 11:11:50 -- common/autotest_common.sh@940 -- # kill -0 81369 00:15:39.515 11:11:50 -- common/autotest_common.sh@941 -- # uname 00:15:39.515 11:11:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.515 11:11:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81369 00:15:39.515 11:11:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.515 11:11:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.515 11:11:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81369' 00:15:39.515 killing process with pid 81369 00:15:39.515 11:11:50 -- 
common/autotest_common.sh@955 -- # kill 81369 00:15:39.515 11:11:50 -- common/autotest_common.sh@960 -- # wait 81369 00:15:39.775 11:11:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.775 11:11:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.775 11:11:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.775 11:11:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.775 11:11:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.775 11:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.775 11:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.775 11:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.775 11:11:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.775 00:15:39.775 real 0m19.717s 00:15:39.775 user 1m26.753s 00:15:39.775 sys 0m4.396s 00:15:39.775 11:11:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.775 11:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:39.775 ************************************ 00:15:39.775 END TEST nvmf_fio_host 00:15:39.775 ************************************ 00:15:39.775 11:11:50 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:39.775 11:11:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.775 11:11:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.775 11:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:39.775 ************************************ 00:15:39.775 START TEST nvmf_failover 00:15:39.775 ************************************ 00:15:39.775 11:11:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:40.035 * Looking for test storage... 00:15:40.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:40.035 11:11:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:40.035 11:11:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:40.035 11:11:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:40.035 11:11:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:40.035 11:11:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:40.035 11:11:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:40.035 11:11:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:40.035 11:11:51 -- scripts/common.sh@335 -- # IFS=.-: 00:15:40.035 11:11:51 -- scripts/common.sh@335 -- # read -ra ver1 00:15:40.035 11:11:51 -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.035 11:11:51 -- scripts/common.sh@336 -- # read -ra ver2 00:15:40.035 11:11:51 -- scripts/common.sh@337 -- # local 'op=<' 00:15:40.035 11:11:51 -- scripts/common.sh@339 -- # ver1_l=2 00:15:40.035 11:11:51 -- scripts/common.sh@340 -- # ver2_l=1 00:15:40.035 11:11:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:40.035 11:11:51 -- scripts/common.sh@343 -- # case "$op" in 00:15:40.035 11:11:51 -- scripts/common.sh@344 -- # : 1 00:15:40.035 11:11:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:40.035 11:11:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.035 11:11:51 -- scripts/common.sh@364 -- # decimal 1 00:15:40.035 11:11:51 -- scripts/common.sh@352 -- # local d=1 00:15:40.035 11:11:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.035 11:11:51 -- scripts/common.sh@354 -- # echo 1 00:15:40.035 11:11:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:40.035 11:11:51 -- scripts/common.sh@365 -- # decimal 2 00:15:40.035 11:11:51 -- scripts/common.sh@352 -- # local d=2 00:15:40.035 11:11:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.035 11:11:51 -- scripts/common.sh@354 -- # echo 2 00:15:40.035 11:11:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:40.035 11:11:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:40.035 11:11:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:40.035 11:11:51 -- scripts/common.sh@367 -- # return 0 00:15:40.035 11:11:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.035 11:11:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:40.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.035 --rc genhtml_branch_coverage=1 00:15:40.035 --rc genhtml_function_coverage=1 00:15:40.035 --rc genhtml_legend=1 00:15:40.035 --rc geninfo_all_blocks=1 00:15:40.035 --rc geninfo_unexecuted_blocks=1 00:15:40.035 00:15:40.035 ' 00:15:40.035 11:11:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:40.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.035 --rc genhtml_branch_coverage=1 00:15:40.035 --rc genhtml_function_coverage=1 00:15:40.035 --rc genhtml_legend=1 00:15:40.035 --rc geninfo_all_blocks=1 00:15:40.035 --rc geninfo_unexecuted_blocks=1 00:15:40.035 00:15:40.035 ' 00:15:40.035 11:11:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:40.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.036 --rc genhtml_branch_coverage=1 00:15:40.036 --rc genhtml_function_coverage=1 00:15:40.036 --rc genhtml_legend=1 00:15:40.036 --rc geninfo_all_blocks=1 00:15:40.036 --rc geninfo_unexecuted_blocks=1 00:15:40.036 00:15:40.036 ' 00:15:40.036 11:11:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:40.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.036 --rc genhtml_branch_coverage=1 00:15:40.036 --rc genhtml_function_coverage=1 00:15:40.036 --rc genhtml_legend=1 00:15:40.036 --rc geninfo_all_blocks=1 00:15:40.036 --rc geninfo_unexecuted_blocks=1 00:15:40.036 00:15:40.036 ' 00:15:40.036 11:11:51 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.036 11:11:51 -- nvmf/common.sh@7 -- # uname -s 00:15:40.036 11:11:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.036 11:11:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.036 11:11:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.036 11:11:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.036 11:11:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.036 11:11:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.036 11:11:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.036 11:11:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.036 11:11:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.036 11:11:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:15:40.036 
11:11:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:15:40.036 11:11:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.036 11:11:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.036 11:11:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.036 11:11:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.036 11:11:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.036 11:11:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.036 11:11:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.036 11:11:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.036 11:11:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.036 11:11:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.036 11:11:51 -- paths/export.sh@5 -- # export PATH 00:15:40.036 11:11:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.036 11:11:51 -- nvmf/common.sh@46 -- # : 0 00:15:40.036 11:11:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.036 11:11:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.036 11:11:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.036 11:11:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.036 11:11:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.036 11:11:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
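(The NVME_CONNECT and NVME_HOST variables set above are consumed when a test drives I/O through the kernel initiator; this particular run uses the SPDK fio plugin and bdevperf instead, so the following is only a hypothetical example of how those variables expand against the listener created below.)

    # hypothetical kernel-initiator connect using the values from the trace above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"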
00:15:40.036 11:11:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.036 11:11:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.036 11:11:51 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.036 11:11:51 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.036 11:11:51 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.036 11:11:51 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:40.036 11:11:51 -- host/failover.sh@18 -- # nvmftestinit 00:15:40.036 11:11:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:40.036 11:11:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.036 11:11:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.036 11:11:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.036 11:11:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.036 11:11:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.036 11:11:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.036 11:11:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.036 11:11:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:40.036 11:11:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:40.036 11:11:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.036 11:11:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.036 11:11:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.036 11:11:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:40.036 11:11:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.036 11:11:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.036 11:11:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.036 11:11:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.036 11:11:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.036 11:11:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.036 11:11:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.036 11:11:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.036 11:11:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:40.036 11:11:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:40.036 Cannot find device "nvmf_tgt_br" 00:15:40.036 11:11:51 -- nvmf/common.sh@154 -- # true 00:15:40.036 11:11:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.036 Cannot find device "nvmf_tgt_br2" 00:15:40.036 11:11:51 -- nvmf/common.sh@155 -- # true 00:15:40.036 11:11:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:40.036 11:11:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:40.036 Cannot find device "nvmf_tgt_br" 00:15:40.036 11:11:51 -- nvmf/common.sh@157 -- # true 00:15:40.036 11:11:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:40.036 Cannot find device "nvmf_tgt_br2" 00:15:40.036 11:11:51 -- nvmf/common.sh@158 -- # true 00:15:40.036 11:11:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:40.295 11:11:51 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:40.295 11:11:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.295 11:11:51 -- nvmf/common.sh@161 -- # true 00:15:40.295 11:11:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.295 11:11:51 -- nvmf/common.sh@162 -- # true 00:15:40.295 11:11:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.295 11:11:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.295 11:11:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.295 11:11:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.295 11:11:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.295 11:11:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.295 11:11:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.295 11:11:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.295 11:11:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.295 11:11:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:40.295 11:11:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:40.295 11:11:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:40.295 11:11:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:40.295 11:11:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.295 11:11:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.295 11:11:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.295 11:11:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:40.295 11:11:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:40.295 11:11:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.295 11:11:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.295 11:11:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.295 11:11:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.295 11:11:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.295 11:11:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:40.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:40.295 00:15:40.295 --- 10.0.0.2 ping statistics --- 00:15:40.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.295 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:40.295 11:11:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:40.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:40.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:40.295 00:15:40.295 --- 10.0.0.3 ping statistics --- 00:15:40.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.295 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:40.295 11:11:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:40.295 00:15:40.295 --- 10.0.0.1 ping statistics --- 00:15:40.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.295 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:40.295 11:11:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.295 11:11:51 -- nvmf/common.sh@421 -- # return 0 00:15:40.295 11:11:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:40.295 11:11:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.295 11:11:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:40.295 11:11:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:40.295 11:11:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.295 11:11:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:40.295 11:11:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:40.295 11:11:51 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:40.295 11:11:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:40.295 11:11:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.295 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:40.295 11:11:51 -- nvmf/common.sh@469 -- # nvmfpid=81934 00:15:40.295 11:11:51 -- nvmf/common.sh@470 -- # waitforlisten 81934 00:15:40.295 11:11:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:40.295 11:11:51 -- common/autotest_common.sh@829 -- # '[' -z 81934 ']' 00:15:40.295 11:11:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.295 11:11:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.295 11:11:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.295 11:11:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.295 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:40.553 [2024-12-06 11:11:51.466443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:40.553 [2024-12-06 11:11:51.466725] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.553 [2024-12-06 11:11:51.598816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.553 [2024-12-06 11:11:51.631790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.553 [2024-12-06 11:11:51.632217] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.553 [2024-12-06 11:11:51.632337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:40.553 [2024-12-06 11:11:51.632465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.553 [2024-12-06 11:11:51.632851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.553 [2024-12-06 11:11:51.632934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.553 [2024-12-06 11:11:51.632939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.812 11:11:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.812 11:11:51 -- common/autotest_common.sh@862 -- # return 0 00:15:40.812 11:11:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.812 11:11:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.812 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 11:11:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.812 11:11:51 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.071 [2024-12-06 11:11:52.015592] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.071 11:11:52 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:41.330 Malloc0 00:15:41.330 11:11:52 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.589 11:11:52 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.847 11:11:52 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.106 [2024-12-06 11:11:53.082666] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.106 11:11:53 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:42.365 [2024-12-06 11:11:53.314906] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:42.365 11:11:53 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:42.624 [2024-12-06 11:11:53.551127] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:42.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
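For reference, the target-side setup that host/failover.sh has driven up to this point condenses to the RPC sequence below. This is only a sketch restating commands already visible in the log: the rpc.py path, the 64/512 Malloc geometry, the subsystem NQN and the three ports are taken from the log itself, and the loop is shorthand for the three separate add_listener calls.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                 # the three listeners added above
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done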
00:15:42.624 11:11:53 -- host/failover.sh@31 -- # bdevperf_pid=81984 00:15:42.624 11:11:53 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:42.624 11:11:53 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.624 11:11:53 -- host/failover.sh@34 -- # waitforlisten 81984 /var/tmp/bdevperf.sock 00:15:42.624 11:11:53 -- common/autotest_common.sh@829 -- # '[' -z 81984 ']' 00:15:42.624 11:11:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.624 11:11:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.624 11:11:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.624 11:11:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.624 11:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 11:11:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.571 11:11:54 -- common/autotest_common.sh@862 -- # return 0 00:15:43.571 11:11:54 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:43.829 NVMe0n1 00:15:43.829 11:11:54 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.088 00:15:44.088 11:11:55 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.088 11:11:55 -- host/failover.sh@39 -- # run_test_pid=82007 00:15:44.088 11:11:55 -- host/failover.sh@41 -- # sleep 1 00:15:45.460 11:11:56 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.460 [2024-12-06 11:11:56.444076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444191] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444199] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 [2024-12-06 11:11:56.444282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703240 is same with the state(5) to be set 00:15:45.460 11:11:56 -- host/failover.sh@45 -- # sleep 3 00:15:48.742 11:11:59 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.742 00:15:48.742 11:11:59 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:49.001 [2024-12-06 11:12:00.085853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e50 is same with the state(5) to be set 00:15:49.001 [2024-12-06 11:12:00.085911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e50 is same with the state(5) to be set 00:15:49.001 [2024-12-06 11:12:00.085940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e50 is same with the state(5) to be set 00:15:49.001 [2024-12-06 11:12:00.085948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e50 is same with the state(5) to be set 00:15:49.001 11:12:00 -- host/failover.sh@50 -- # sleep 3 00:15:52.284 11:12:03 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.284 [2024-12-06 11:12:03.354498] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.284 11:12:03 -- host/failover.sh@55 -- # sleep 1 00:15:53.660 11:12:04 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:53.660 [2024-12-06 11:12:04.603967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604219] 
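The initiator side of the failover exercise is equally small. Below is a condensed sketch of what the log shows being driven over the bdevperf RPC socket; the paths, ports and NQN are the ones already used above, and backgrounding perform_tests simply stands in for the script's run_test_pid bookkeeping.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # two paths to the same subsystem, attached under a single controller name
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the 15 s verify workload, then remove the active listener to force the failover
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420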
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 [2024-12-06 11:12:04.604308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7550 is same with the state(5) to be set 00:15:53.660 11:12:04 -- host/failover.sh@59 -- # wait 82007 00:16:00.264 0 00:16:00.264 11:12:10 -- host/failover.sh@61 -- # killprocess 81984 00:16:00.264 11:12:10 -- common/autotest_common.sh@936 -- # '[' -z 81984 ']' 00:16:00.264 11:12:10 -- common/autotest_common.sh@940 -- # kill -0 81984 00:16:00.264 11:12:10 -- common/autotest_common.sh@941 -- # uname 00:16:00.264 11:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.264 11:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81984 00:16:00.264 11:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:00.264 11:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:00.264 11:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81984' 00:16:00.264 killing process with pid 81984 00:16:00.264 11:12:10 -- common/autotest_common.sh@955 -- # kill 81984 00:16:00.264 11:12:10 -- common/autotest_common.sh@960 -- # wait 81984 00:16:00.264 11:12:10 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:00.264 [2024-12-06 11:11:53.614645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:00.264 [2024-12-06 11:11:53.614760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81984 ] 00:16:00.264 [2024-12-06 11:11:53.749528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.264 [2024-12-06 11:11:53.784778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.264 Running I/O for 15 seconds... 
00:16:00.264 [2024-12-06 11:11:56.444337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 
11:11:56.444728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.444967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.444999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.264 [2024-12-06 11:11:56.445072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.264 [2024-12-06 11:11:56.445101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.264 [2024-12-06 11:11:56.445130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.264 [2024-12-06 11:11:56.445268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.264 [2024-12-06 11:11:56.445310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445407] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.264 [2024-12-06 11:11:56.445515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.264 [2024-12-06 11:11:56.445529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.445952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.445967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.445987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121712 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 
[2024-12-06 11:11:56.446340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446651] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.446955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.446983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.446998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.265 [2024-12-06 11:11:56.447910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.265 [2024-12-06 11:11:56.447926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.265 [2024-12-06 11:11:56.447939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.447954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.447976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.447992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:11:56.448211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 
11:11:56.448252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:11:56.448410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca87e0 is same with the state(5) to be set 00:16:00.266 [2024-12-06 11:11:56.448441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.266 [2024-12-06 11:11:56.448452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.266 [2024-12-06 11:11:56.448462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121512 len:8 PRP1 0x0 PRP2 0x0 00:16:00.266 [2024-12-06 11:11:56.448475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448518] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca87e0 was disconnected and freed. reset controller. 
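The block above is the expected failure signature when a TCP queue pair is torn down mid-workload: every command still queued on qpair 0x1ca87e0 is completed with the generic status ABORTED - SQ DELETION, printed as (00/08), i.e. status code type 0x00 and status code 0x08 in the completion entry, and with dnr:0 ("do not retry" clear), so the initiator knows the command never executed on the target and can be resubmitted on another path. A quick way to condense such a storm when reading a console log is to count the aborted completions and the qpairs that were freed; a minimal sketch, assuming the console output has been saved to a file named console.log (hypothetical name):

  # Summarize the abort storm: how many SQ-deletion aborts, and which qpairs were torn down
  grep -o 'ABORTED - SQ DELETION ([0-9]*/[0-9]*)' console.log | sort | uniq -c
  grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' console.log | sort | uniq -c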
00:16:00.266 [2024-12-06 11:11:56.448536] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:00.266 [2024-12-06 11:11:56.448603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.266 [2024-12-06 11:11:56.448637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.266 [2024-12-06 11:11:56.448666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.266 [2024-12-06 11:11:56.448693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.266 [2024-12-06 11:11:56.448720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:11:56.448733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:00.266 [2024-12-06 11:11:56.448788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cab820 (9): Bad file descriptor 00:16:00.266 [2024-12-06 11:11:56.450946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:00.266 [2024-12-06 11:11:56.484605] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
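What makes the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" step possible is that the host-side bdev_nvme controller knows about more than one transport ID for the same subsystem, so when the first listener disappears it can reconnect and reset the controller on the next one. Roughly how such alternate paths get registered over the JSON-RPC interface, as a hedged sketch (the bdev name NVMe0 is illustrative, and the real test scripts drive the addresses and NQN from their own variables; depending on the SPDK version the repeated attaches may also need the multipath option):

  # Target side: expose the same subsystem on several TCP listeners
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Host side: attach the same bdev name and subsystem once per path; the extra
  # attaches register alternate (failover) transport IDs rather than new bdevs
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1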
00:16:00.266 [2024-12-06 11:12:00.086006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 
11:12:00.086362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.086947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.086979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.086992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.087038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087085] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.087100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.266 [2024-12-06 11:12:00.087295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.266 [2024-12-06 11:12:00.087552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.266 [2024-12-06 11:12:00.087570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.087782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.087856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.087886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.087916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.087946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.087976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.087992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 
[2024-12-06 11:12:00.088097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.088929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.088973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.088987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.267 [2024-12-06 11:12:00.089959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.089975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.090004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.090018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.267 [2024-12-06 11:12:00.090033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.267 [2024-12-06 11:12:00.090047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 
11:12:00.090062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:00.090238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca9440 is same with the state(5) to be set 00:16:00.268 [2024-12-06 11:12:00.090279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.268 [2024-12-06 11:12:00.090290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.268 [2024-12-06 11:12:00.090301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123040 len:8 PRP1 0x0 PRP2 0x0 00:16:00.268 [2024-12-06 11:12:00.090315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090362] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca9440 was disconnected and freed. reset controller. 
00:16:00.268 [2024-12-06 11:12:00.090381] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:00.268 [2024-12-06 11:12:00.090435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.268 [2024-12-06 11:12:00.090457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.268 [2024-12-06 11:12:00.090487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.268 [2024-12-06 11:12:00.090530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.268 [2024-12-06 11:12:00.090563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:00.090576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:00.268 [2024-12-06 11:12:00.090624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cab820 (9): Bad file descriptor 00:16:00.268 [2024-12-06 11:12:00.093123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:00.268 [2024-12-06 11:12:00.126474] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
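The same cycle repeats for the second hop, 4421 to 4422: the in-flight commands are drained with SQ-deletion aborts, the admin queue's ASYNC EVENT REQUESTs are cancelled, the controller briefly sits in the failed state, and it is then reconnected and reset on the next path. In failover testing this is normally provoked from the target side by withdrawing the listener the host is currently connected to; a minimal sketch of that trigger, with the same illustrative address and NQN as above:

  # Drop the listener the host is currently using to force failover to the next path
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Optionally restore it afterwards so a later pass can fail back to this address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421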
00:16:00.268 [2024-12-06 11:12:04.604370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.604885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.604928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.604954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.604968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.604992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.605933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.605978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.605992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 
11:12:04.606021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.268 [2024-12-06 11:12:04.606273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.606301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.606329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.268 [2024-12-06 11:12:04.606344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.268 [2024-12-06 11:12:04.606357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.606747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.606981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.606994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 
[2024-12-06 11:12:04.607271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.607958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.607975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.607989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.608019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.269 [2024-12-06 11:12:04.608049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.269 [2024-12-06 11:12:04.608408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccfb00 is same with the state(5) to be set 00:16:00.269 [2024-12-06 11:12:04.608440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.269 [2024-12-06 11:12:04.608452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.269 [2024-12-06 11:12:04.608462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:16:00.269 [2024-12-06 11:12:04.608475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.269 [2024-12-06 11:12:04.608519] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ccfb00 was disconnected and freed. reset controller. 
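The long run of *NOTICE* records above is the expected signature of a path being torn down during failover: every READ and WRITE still queued on qid:1 is completed manually with ABORTED - SQ DELETION (00/08), the TCP qpair is disconnected and freed, and the controller is reset on another path. When triaging a dump like this it is usually enough to count the aborts rather than read them; a minimal bash sketch follows, assuming the console output has been saved to a file (the name failover_console.log is a placeholder, not something the test produces):

  #!/usr/bin/env bash
  # Summarize the ABORTED - SQ DELETION flood from a saved copy of this console output.
  # "failover_console.log" is an assumed capture file, not an artifact of the test itself.
  log=failover_console.log

  echo "aborted completions:"
  grep -c 'ABORTED - SQ DELETION' "$log"

  echo "aborted commands by opcode:"
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" |
    awk '{print $NF}' | sort | uniq -c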
00:16:00.269 [2024-12-06 11:12:04.608537] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:00.269 [2024-12-06 11:12:04.608627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.269 [2024-12-06 11:12:04.608650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.269 [2024-12-06 11:12:04.608666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.269 [2024-12-06 11:12:04.608680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.269 [2024-12-06 11:12:04.608695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.270 [2024-12-06 11:12:04.608708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.270 [2024-12-06 11:12:04.608722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.270 [2024-12-06 11:12:04.608736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.270 [2024-12-06 11:12:04.608750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:00.270 [2024-12-06 11:12:04.611452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:00.270 [2024-12-06 11:12:04.611493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cab820 (9): Bad file descriptor
00:16:00.270 [2024-12-06 11:12:04.648237] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:00.270
00:16:00.270 Latency(us)
00:16:00.270 [2024-12-06T11:12:11.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.270 [2024-12-06T11:12:11.417Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.270 Verification LBA range: start 0x0 length 0x4000
00:16:00.270 NVMe0n1 : 15.01 13152.80 51.38 342.02 0.00 9466.61 441.25 15252.01
00:16:00.270 [2024-12-06T11:12:11.417Z] ===================================================================================================================
00:16:00.270 [2024-12-06T11:12:11.417Z] Total : 13152.80 51.38 342.02 0.00 9466.61 441.25 15252.01
00:16:00.270 Received shutdown signal, test time was about 15.000000 seconds
00:16:00.270
00:16:00.270 Latency(us)
00:16:00.270 [2024-12-06T11:12:11.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.270 [2024-12-06T11:12:11.417Z] ===================================================================================================================
00:16:00.270 [2024-12-06T11:12:11.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:00.270 11:12:10 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:00.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:00.270 11:12:10 -- host/failover.sh@65 -- # count=3 00:16:00.270 11:12:10 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:00.270 11:12:10 -- host/failover.sh@73 -- # bdevperf_pid=82180 00:16:00.270 11:12:10 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:00.270 11:12:10 -- host/failover.sh@75 -- # waitforlisten 82180 /var/tmp/bdevperf.sock 00:16:00.270 11:12:10 -- common/autotest_common.sh@829 -- # '[' -z 82180 ']' 00:16:00.270 11:12:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.270 11:12:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.270 11:12:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.270 11:12:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.270 11:12:10 -- common/autotest_common.sh@10 -- # set +x 00:16:00.270 11:12:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.270 11:12:10 -- common/autotest_common.sh@862 -- # return 0 00:16:00.270 11:12:10 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:00.270 [2024-12-06 11:12:11.037368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:00.270 11:12:11 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:00.270 [2024-12-06 11:12:11.289140] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:00.270 11:12:11 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:00.526 NVMe0n1 00:16:00.526 11:12:11 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:00.783 00:16:01.040 11:12:11 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.298 00:16:01.298 11:12:12 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:01.298 11:12:12 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:01.556 11:12:12 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.814 11:12:12 -- host/failover.sh@87 -- # sleep 3 00:16:05.095 11:12:15 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:05.095 11:12:15 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:05.095 11:12:16 -- host/failover.sh@90 -- # run_test_pid=82255 00:16:05.095 11:12:16 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:05.095 11:12:16 -- host/failover.sh@92 -- # wait 82255 00:16:06.031 0 00:16:06.031 11:12:17 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:06.031 [2024-12-06 11:12:10.586779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:06.031 [2024-12-06 11:12:10.586882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82180 ] 00:16:06.031 [2024-12-06 11:12:10.723137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.031 [2024-12-06 11:12:10.757110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.031 [2024-12-06 11:12:12.709858] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:06.031 [2024-12-06 11:12:12.710427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.031 [2024-12-06 11:12:12.710536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.031 [2024-12-06 11:12:12.710698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.031 [2024-12-06 11:12:12.710787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.031 [2024-12-06 11:12:12.710865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.031 [2024-12-06 11:12:12.710939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.031 [2024-12-06 11:12:12.711023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.031 [2024-12-06 11:12:12.711094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.031 [2024-12-06 11:12:12.711160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:06.031 [2024-12-06 11:12:12.711319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:06.031 [2024-12-06 11:12:12.711425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc29820 (9): Bad file descriptor 00:16:06.031 [2024-12-06 11:12:12.719485] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:06.031 Running I/O for 1 seconds... 
00:16:06.031
00:16:06.031 Latency(us)
00:16:06.031 [2024-12-06T11:12:17.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:06.031 [2024-12-06T11:12:17.178Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:06.031 Verification LBA range: start 0x0 length 0x4000
00:16:06.031 NVMe0n1 : 1.01 13041.63 50.94 0.00 0.00 9768.13 1005.38 14775.39
00:16:06.031 [2024-12-06T11:12:17.178Z] ===================================================================================================================
00:16:06.031 [2024-12-06T11:12:17.178Z] Total : 13041.63 50.94 0.00 0.00 9768.13 1005.38 14775.39
00:16:06.031 11:12:17 -- host/failover.sh@95 -- # grep -q NVMe0
00:16:06.031 11:12:17 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:06.598 11:12:17 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:06.598 11:12:17 -- host/failover.sh@99 -- # grep -q NVMe0
00:16:06.598 11:12:17 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:07.166 11:12:18 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:07.166 11:12:18 -- host/failover.sh@101 -- # sleep 3
00:16:10.454 11:12:21 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:10.454 11:12:21 -- host/failover.sh@103 -- # grep -q NVMe0
00:16:10.454 11:12:21 -- host/failover.sh@108 -- # killprocess 82180
00:16:10.454 11:12:21 -- common/autotest_common.sh@936 -- # '[' -z 82180 ']'
00:16:10.454 11:12:21 -- common/autotest_common.sh@940 -- # kill -0 82180
00:16:10.454 11:12:21 -- common/autotest_common.sh@941 -- # uname
00:16:10.454 11:12:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:10.454 11:12:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82180
00:16:10.454 killing process with pid 82180
00:16:10.454 11:12:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:10.454 11:12:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:10.454 11:12:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82180'
00:16:10.454 11:12:21 -- common/autotest_common.sh@955 -- # kill 82180
00:16:10.454 11:12:21 -- common/autotest_common.sh@960 -- # wait 82180
00:16:10.712 11:12:21 -- host/failover.sh@110 -- # sync
00:16:10.712 11:12:21 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:10.969 11:12:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:16:10.969 11:12:22 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:10.969 11:12:22 -- host/failover.sh@116 -- # nvmftestfini
00:16:10.969 11:12:22 -- nvmf/common.sh@476 -- # nvmfcleanup
00:16:10.969 11:12:22 -- nvmf/common.sh@116 -- # sync
00:16:10.969 11:12:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:10.969 11:12:22 -- nvmf/common.sh@119 -- # set +e
00:16:10.969 11:12:22 -- nvmf/common.sh@120 -- # for i in {1..20}
00:16:10.969 11:12:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:10.969 rmmod nvme_tcp
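Read as a whole, the host/failover.sh trace from @72 onward is one multipath exercise: bdevperf is restarted idle, the subsystem gains listeners on ports 4421 and 4422, the controller is attached through all three ports, the first path is detached, a short verify job is driven over the RPC socket, and the remaining old paths are then detached one by one while checking that NVMe0 is still reported. The following is a condensed sketch of that flow built only from the commands visible in this log (the paths, addresses and NQN are the ones used by this run); it is illustrative rather than a replacement for the script:

  #!/usr/bin/env bash
  # Sketch of the failover exercise reconstructed from the host/failover.sh trace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Expose the subsystem on two additional target ports (default target RPC socket).
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

  # Start bdevperf idle (-z) so it can be driven over its own RPC socket.
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  while [ ! -S "$SOCK" ]; do sleep 0.2; done   # stand-in for waitforlisten

  # Attach the controller through every port; the extra attaches register the
  # alternate trids that failover switches between.
  for port in 4420 4421 4422; do
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done

  # Drop the first path before the verify pass, as the script does, so the job
  # runs across a failover, then drive the workload over the RPC socket.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

  # Detach the remaining old paths one by one; NVMe0 must still be reported after each.
  for port in 4422 4421; do
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0 ||
      { echo "NVMe0 lost after detaching port $port" >&2; exit 1; }
  done

  kill "$bdevperf_pid"; wait "$bdevperf_pid"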
00:16:10.969 rmmod nvme_fabrics 00:16:10.969 rmmod nvme_keyring 00:16:10.969 11:12:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:11.227 11:12:22 -- nvmf/common.sh@123 -- # set -e 00:16:11.227 11:12:22 -- nvmf/common.sh@124 -- # return 0 00:16:11.227 11:12:22 -- nvmf/common.sh@477 -- # '[' -n 81934 ']' 00:16:11.227 11:12:22 -- nvmf/common.sh@478 -- # killprocess 81934 00:16:11.227 11:12:22 -- common/autotest_common.sh@936 -- # '[' -z 81934 ']' 00:16:11.227 11:12:22 -- common/autotest_common.sh@940 -- # kill -0 81934 00:16:11.227 11:12:22 -- common/autotest_common.sh@941 -- # uname 00:16:11.227 11:12:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.227 11:12:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81934 00:16:11.227 killing process with pid 81934 00:16:11.227 11:12:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:11.227 11:12:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:11.227 11:12:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81934' 00:16:11.228 11:12:22 -- common/autotest_common.sh@955 -- # kill 81934 00:16:11.228 11:12:22 -- common/autotest_common.sh@960 -- # wait 81934 00:16:11.228 11:12:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.228 11:12:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.228 11:12:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.228 11:12:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.228 11:12:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.228 11:12:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.228 11:12:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.228 11:12:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.228 11:12:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:11.228 00:16:11.228 real 0m31.484s 00:16:11.228 user 2m2.273s 00:16:11.228 sys 0m5.413s 00:16:11.228 11:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:11.228 11:12:22 -- common/autotest_common.sh@10 -- # set +x 00:16:11.228 ************************************ 00:16:11.228 END TEST nvmf_failover 00:16:11.228 ************************************ 00:16:11.486 11:12:22 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:11.486 11:12:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.486 11:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.486 11:12:22 -- common/autotest_common.sh@10 -- # set +x 00:16:11.486 ************************************ 00:16:11.486 START TEST nvmf_discovery 00:16:11.486 ************************************ 00:16:11.486 11:12:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:11.486 * Looking for test storage... 
00:16:11.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:11.486 11:12:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:11.486 11:12:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:11.486 11:12:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:11.486 11:12:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:11.486 11:12:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:11.486 11:12:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:11.486 11:12:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:11.486 11:12:22 -- scripts/common.sh@335 -- # IFS=.-: 00:16:11.486 11:12:22 -- scripts/common.sh@335 -- # read -ra ver1 00:16:11.486 11:12:22 -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.486 11:12:22 -- scripts/common.sh@336 -- # read -ra ver2 00:16:11.486 11:12:22 -- scripts/common.sh@337 -- # local 'op=<' 00:16:11.486 11:12:22 -- scripts/common.sh@339 -- # ver1_l=2 00:16:11.486 11:12:22 -- scripts/common.sh@340 -- # ver2_l=1 00:16:11.486 11:12:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:11.486 11:12:22 -- scripts/common.sh@343 -- # case "$op" in 00:16:11.486 11:12:22 -- scripts/common.sh@344 -- # : 1 00:16:11.486 11:12:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:11.486 11:12:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.486 11:12:22 -- scripts/common.sh@364 -- # decimal 1 00:16:11.486 11:12:22 -- scripts/common.sh@352 -- # local d=1 00:16:11.486 11:12:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.486 11:12:22 -- scripts/common.sh@354 -- # echo 1 00:16:11.486 11:12:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:11.486 11:12:22 -- scripts/common.sh@365 -- # decimal 2 00:16:11.486 11:12:22 -- scripts/common.sh@352 -- # local d=2 00:16:11.486 11:12:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.486 11:12:22 -- scripts/common.sh@354 -- # echo 2 00:16:11.486 11:12:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:11.486 11:12:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:11.486 11:12:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:11.486 11:12:22 -- scripts/common.sh@367 -- # return 0 00:16:11.486 11:12:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.486 11:12:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.486 --rc genhtml_branch_coverage=1 00:16:11.486 --rc genhtml_function_coverage=1 00:16:11.486 --rc genhtml_legend=1 00:16:11.486 --rc geninfo_all_blocks=1 00:16:11.486 --rc geninfo_unexecuted_blocks=1 00:16:11.486 00:16:11.486 ' 00:16:11.486 11:12:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.486 --rc genhtml_branch_coverage=1 00:16:11.486 --rc genhtml_function_coverage=1 00:16:11.486 --rc genhtml_legend=1 00:16:11.486 --rc geninfo_all_blocks=1 00:16:11.486 --rc geninfo_unexecuted_blocks=1 00:16:11.486 00:16:11.486 ' 00:16:11.486 11:12:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.486 --rc genhtml_branch_coverage=1 00:16:11.486 --rc genhtml_function_coverage=1 00:16:11.486 --rc genhtml_legend=1 00:16:11.486 --rc geninfo_all_blocks=1 00:16:11.486 --rc geninfo_unexecuted_blocks=1 00:16:11.486 00:16:11.486 ' 00:16:11.486 
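The scripts/common.sh trace above ("lt 1.15 2" through cmp_versions and decimal) is how the harness decides that the installed lcov predates 2.x: both version strings are split on ".", "-" and ":" and their components are compared numerically from left to right. A rough, self-contained reconstruction of that comparison is sketched below; it handles purely numeric components only, while the real script also normalizes non-numeric parts through its decimal helper:

  #!/usr/bin/env bash
  # Component-wise version comparison in the spirit of scripts/common.sh's cmp_versions.
  cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      a=${ver1[v]:-0} b=${ver2[v]:-0}
      (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
      (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # every component matched
  }

  lt() { cmp_versions "$1" '<' "$2"; }

  # "lt 1.15 2" succeeds, which is why lcov 1.15 is treated as pre-2.x here.
  lt 1.15 2 && echo "lcov is older than 2.x"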
11:12:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.486 --rc genhtml_branch_coverage=1 00:16:11.486 --rc genhtml_function_coverage=1 00:16:11.486 --rc genhtml_legend=1 00:16:11.486 --rc geninfo_all_blocks=1 00:16:11.486 --rc geninfo_unexecuted_blocks=1 00:16:11.486 00:16:11.486 ' 00:16:11.486 11:12:22 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.486 11:12:22 -- nvmf/common.sh@7 -- # uname -s 00:16:11.486 11:12:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.486 11:12:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.486 11:12:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.486 11:12:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.486 11:12:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.486 11:12:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.486 11:12:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.486 11:12:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.486 11:12:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.486 11:12:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.486 11:12:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:11.486 11:12:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:11.486 11:12:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.486 11:12:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.486 11:12:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.486 11:12:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.486 11:12:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.486 11:12:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.486 11:12:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.486 11:12:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.486 11:12:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.487 11:12:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.487 11:12:22 -- paths/export.sh@5 -- # export PATH 00:16:11.487 11:12:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.487 11:12:22 -- nvmf/common.sh@46 -- # : 0 00:16:11.487 11:12:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.487 11:12:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.487 11:12:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.487 11:12:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.487 11:12:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.487 11:12:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.487 11:12:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.487 11:12:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.487 11:12:22 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:11.487 11:12:22 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:11.487 11:12:22 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:11.487 11:12:22 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:11.487 11:12:22 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:11.487 11:12:22 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:11.487 11:12:22 -- host/discovery.sh@25 -- # nvmftestinit 00:16:11.487 11:12:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.487 11:12:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.487 11:12:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:11.487 11:12:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.487 11:12:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.487 11:12:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.487 11:12:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.487 11:12:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.487 11:12:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:11.487 11:12:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:11.487 11:12:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:11.487 11:12:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:11.487 11:12:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:11.487 11:12:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:11.487 11:12:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.487 11:12:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.487 11:12:22 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.487 11:12:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:11.487 11:12:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.487 11:12:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.487 11:12:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.487 11:12:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.487 11:12:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.487 11:12:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.487 11:12:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.487 11:12:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.487 11:12:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:11.487 11:12:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:11.487 Cannot find device "nvmf_tgt_br" 00:16:11.487 11:12:22 -- nvmf/common.sh@154 -- # true 00:16:11.487 11:12:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.487 Cannot find device "nvmf_tgt_br2" 00:16:11.487 11:12:22 -- nvmf/common.sh@155 -- # true 00:16:11.487 11:12:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:11.487 11:12:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:11.744 Cannot find device "nvmf_tgt_br" 00:16:11.744 11:12:22 -- nvmf/common.sh@157 -- # true 00:16:11.744 11:12:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.744 Cannot find device "nvmf_tgt_br2" 00:16:11.744 11:12:22 -- nvmf/common.sh@158 -- # true 00:16:11.744 11:12:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.744 11:12:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.744 11:12:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.744 11:12:22 -- nvmf/common.sh@161 -- # true 00:16:11.745 11:12:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.745 11:12:22 -- nvmf/common.sh@162 -- # true 00:16:11.745 11:12:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.745 11:12:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.745 11:12:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.745 11:12:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.745 11:12:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.745 11:12:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.745 11:12:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.745 11:12:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.745 11:12:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.745 11:12:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.745 11:12:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.745 11:12:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.745 11:12:22 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.745 11:12:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.745 11:12:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.745 11:12:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.745 11:12:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.745 11:12:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.745 11:12:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.745 11:12:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.745 11:12:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.745 11:12:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.745 11:12:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.745 11:12:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:12.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:12.004 00:16:12.004 --- 10.0.0.2 ping statistics --- 00:16:12.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.004 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:12.004 11:12:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:12.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:12.004 00:16:12.004 --- 10.0.0.3 ping statistics --- 00:16:12.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.004 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:12.004 11:12:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:12.004 00:16:12.004 --- 10.0.0.1 ping statistics --- 00:16:12.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.004 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:12.004 11:12:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.004 11:12:22 -- nvmf/common.sh@421 -- # return 0 00:16:12.004 11:12:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:12.004 11:12:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.004 11:12:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:12.004 11:12:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:12.004 11:12:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.004 11:12:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:12.004 11:12:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:12.004 11:12:22 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:12.004 11:12:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:12.004 11:12:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.004 11:12:22 -- common/autotest_common.sh@10 -- # set +x 00:16:12.004 11:12:22 -- nvmf/common.sh@469 -- # nvmfpid=82533 00:16:12.004 11:12:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:12.004 11:12:22 -- nvmf/common.sh@470 -- # waitforlisten 82533 00:16:12.004 11:12:22 -- common/autotest_common.sh@829 -- # '[' -z 82533 ']' 00:16:12.004 11:12:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.004 11:12:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.004 11:12:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.004 11:12:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.004 11:12:22 -- common/autotest_common.sh@10 -- # set +x 00:16:12.004 [2024-12-06 11:12:22.979819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:12.004 [2024-12-06 11:12:22.979923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.004 [2024-12-06 11:12:23.120452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.262 [2024-12-06 11:12:23.153858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:12.262 [2024-12-06 11:12:23.154023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.262 [2024-12-06 11:12:23.154036] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.262 [2024-12-06 11:12:23.154044] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
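The nvmf_veth_init step logged above builds a small bridged test network before the target starts. A consolidated sketch of that layout, assuming root privileges; interface names and addresses are taken from the commands in the log, and the initial "Cannot find device" cleanup noise is omitted:

    # target namespace plus three veth pairs; the *_if ends move into the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # the initiator keeps 10.0.0.1, the namespaced target answers on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring every end up and tie the host-side peers together with the nvmf_br bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic in, allow forwarding across the bridge, then verify with ping
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1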
00:16:12.262 [2024-12-06 11:12:23.154074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.262 11:12:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.262 11:12:23 -- common/autotest_common.sh@862 -- # return 0 00:16:12.262 11:12:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.262 11:12:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 11:12:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.262 11:12:23 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.262 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 [2024-12-06 11:12:23.269112] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.262 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.262 11:12:23 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:12.262 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 [2024-12-06 11:12:23.277252] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:12.262 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.262 11:12:23 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:12.262 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 null0 00:16:12.262 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.262 11:12:23 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:12.262 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 null1 00:16:12.262 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.262 11:12:23 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:12.262 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.262 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.262 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.262 11:12:23 -- host/discovery.sh@45 -- # hostpid=82552 00:16:12.262 11:12:23 -- host/discovery.sh@46 -- # waitforlisten 82552 /tmp/host.sock 00:16:12.262 11:12:23 -- common/autotest_common.sh@829 -- # '[' -z 82552 ']' 00:16:12.262 11:12:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:12.263 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:12.263 11:12:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.263 11:12:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:12.263 11:12:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.263 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.263 11:12:23 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:12.263 [2024-12-06 11:12:23.351242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
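With the network up, the test runs two SPDK applications: the NVMe-oF target inside the namespace (RPC over the default /var/tmp/spdk.sock) and a host-side app acting as the initiator, driven over /tmp/host.sock. A sketch of that bring-up from an SPDK build tree, calling scripts/rpc.py directly where the harness goes through its rpc_cmd wrapper; binaries, flags and NQNs are the ones shown in the log, and each app's RPC socket must be up before its RPCs are sent:

    # target side: core mask 0x2, all trace groups enabled, running inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # TCP transport, discovery listener on 10.0.0.2:8009, and two 1000-block x 512-byte null bdevs
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
            -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512

    # host side: a second SPDK app on core 0 with its own RPC socket, bdev_nvme logging enabled
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme

Running the target in its own network namespace is what lets a single machine act as both initiator (10.0.0.1) and target (10.0.0.2/10.0.0.3) over real TCP sockets; the discovery RPCs that follow in the log are all issued against /tmp/host.sock.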
00:16:12.263 [2024-12-06 11:12:23.351352] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82552 ] 00:16:12.521 [2024-12-06 11:12:23.490046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.521 [2024-12-06 11:12:23.529423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:12.521 [2024-12-06 11:12:23.529639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.521 11:12:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.521 11:12:23 -- common/autotest_common.sh@862 -- # return 0 00:16:12.521 11:12:23 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:12.521 11:12:23 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:12.521 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.521 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.521 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.521 11:12:23 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:12.521 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.521 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.521 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.521 11:12:23 -- host/discovery.sh@72 -- # notify_id=0 00:16:12.521 11:12:23 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:12.521 11:12:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:12.521 11:12:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:12.521 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.521 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.521 11:12:23 -- host/discovery.sh@59 -- # sort 00:16:12.521 11:12:23 -- host/discovery.sh@59 -- # xargs 00:16:12.521 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:12.780 11:12:23 -- host/discovery.sh@79 -- # get_bdev_list 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # sort 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # xargs 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:12.780 11:12:23 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:12.780 11:12:23 -- host/discovery.sh@59 
-- # sort 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # xargs 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:12.780 11:12:23 -- host/discovery.sh@83 -- # get_bdev_list 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # sort 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- host/discovery.sh@55 -- # xargs 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:12.780 11:12:23 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:12.780 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.780 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # sort 00:16:12.780 11:12:23 -- host/discovery.sh@59 -- # xargs 00:16:12.780 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.780 11:12:23 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:13.039 11:12:23 -- host/discovery.sh@87 -- # get_bdev_list 00:16:13.039 11:12:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:13.039 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 11:12:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:13.039 11:12:23 -- host/discovery.sh@55 -- # sort 00:16:13.039 11:12:23 -- host/discovery.sh@55 -- # xargs 00:16:13.039 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:23 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:13.039 11:12:23 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:13.039 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 [2024-12-06 11:12:23.985407] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.039 11:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:23 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:13.039 11:12:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:13.039 11:12:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:13.039 11:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:23 -- host/discovery.sh@59 -- # sort 00:16:13.039 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 11:12:23 -- host/discovery.sh@59 -- # xargs 00:16:13.039 11:12:23 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:24 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:13.039 11:12:24 -- host/discovery.sh@93 -- # get_bdev_list 00:16:13.039 11:12:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:13.039 11:12:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:13.039 11:12:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:24 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 11:12:24 -- host/discovery.sh@55 -- # sort 00:16:13.039 11:12:24 -- host/discovery.sh@55 -- # xargs 00:16:13.039 11:12:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:24 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:13.039 11:12:24 -- host/discovery.sh@94 -- # get_notification_count 00:16:13.039 11:12:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:13.039 11:12:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:24 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 11:12:24 -- host/discovery.sh@74 -- # jq '. | length' 00:16:13.039 11:12:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:24 -- host/discovery.sh@74 -- # notification_count=0 00:16:13.039 11:12:24 -- host/discovery.sh@75 -- # notify_id=0 00:16:13.039 11:12:24 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:24 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:13.039 11:12:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.039 11:12:24 -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 11:12:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.039 11:12:24 -- host/discovery.sh@100 -- # sleep 1 00:16:13.631 [2024-12-06 11:12:24.631899] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:13.631 [2024-12-06 11:12:24.631943] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:13.631 [2024-12-06 11:12:24.631961] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:13.631 [2024-12-06 11:12:24.637937] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:13.631 [2024-12-06 11:12:24.693529] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:13.631 [2024-12-06 11:12:24.693602] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:14.213 11:12:25 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:14.213 11:12:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:14.213 11:12:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:14.213 11:12:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 11:12:25 -- host/discovery.sh@59 -- # sort 00:16:14.213 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 11:12:25 -- host/discovery.sh@59 -- # xargs 00:16:14.213 11:12:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@102 -- # get_bdev_list 00:16:14.213 11:12:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.213 
11:12:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 11:12:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:14.213 11:12:25 -- host/discovery.sh@55 -- # sort 00:16:14.213 11:12:25 -- host/discovery.sh@55 -- # xargs 00:16:14.213 11:12:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:14.213 11:12:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:14.213 11:12:25 -- host/discovery.sh@63 -- # sort -n 00:16:14.213 11:12:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:14.213 11:12:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 11:12:25 -- host/discovery.sh@63 -- # xargs 00:16:14.213 11:12:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:14.213 11:12:25 -- host/discovery.sh@104 -- # get_notification_count 00:16:14.213 11:12:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:14.213 11:12:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 11:12:25 -- host/discovery.sh@74 -- # jq '. | length' 00:16:14.213 11:12:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.470 11:12:25 -- host/discovery.sh@74 -- # notification_count=1 00:16:14.470 11:12:25 -- host/discovery.sh@75 -- # notify_id=1 00:16:14.470 11:12:25 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:14.470 11:12:25 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:14.470 11:12:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.470 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:14.470 11:12:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.470 11:12:25 -- host/discovery.sh@109 -- # sleep 1 00:16:15.406 11:12:26 -- host/discovery.sh@110 -- # get_bdev_list 00:16:15.406 11:12:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.406 11:12:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.406 11:12:26 -- host/discovery.sh@55 -- # sort 00:16:15.406 11:12:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.406 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:16:15.406 11:12:26 -- host/discovery.sh@55 -- # xargs 00:16:15.406 11:12:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.406 11:12:26 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:15.406 11:12:26 -- host/discovery.sh@111 -- # get_notification_count 00:16:15.406 11:12:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:15.406 11:12:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.406 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:16:15.406 11:12:26 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:15.406 11:12:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.406 11:12:26 -- host/discovery.sh@74 -- # notification_count=1 00:16:15.406 11:12:26 -- host/discovery.sh@75 -- # notify_id=2 00:16:15.406 11:12:26 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:15.406 11:12:26 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:15.406 11:12:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.406 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:16:15.406 [2024-12-06 11:12:26.516197] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:15.406 [2024-12-06 11:12:26.517255] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:15.406 [2024-12-06 11:12:26.517306] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:15.406 11:12:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.406 11:12:26 -- host/discovery.sh@117 -- # sleep 1 00:16:15.406 [2024-12-06 11:12:26.523253] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:15.664 [2024-12-06 11:12:26.585514] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:15.664 [2024-12-06 11:12:26.585566] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:15.664 [2024-12-06 11:12:26.585574] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:16.599 11:12:27 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:16.599 11:12:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.599 11:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.599 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 11:12:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.599 11:12:27 -- host/discovery.sh@59 -- # sort 00:16:16.599 11:12:27 -- host/discovery.sh@59 -- # xargs 00:16:16.599 11:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@119 -- # get_bdev_list 00:16:16.599 11:12:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.599 11:12:27 -- host/discovery.sh@55 -- # sort 00:16:16.599 11:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.599 11:12:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.599 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 11:12:27 -- host/discovery.sh@55 -- # xargs 00:16:16.599 11:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:16.599 11:12:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:16.599 11:12:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:16.599 11:12:27 -- host/discovery.sh@63 -- # sort -n 00:16:16.599 11:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.599 11:12:27 -- common/autotest_common.sh@10 
-- # set +x 00:16:16.599 11:12:27 -- host/discovery.sh@63 -- # xargs 00:16:16.599 11:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@121 -- # get_notification_count 00:16:16.599 11:12:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:16.599 11:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.599 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 11:12:27 -- host/discovery.sh@74 -- # jq '. | length' 00:16:16.599 11:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@74 -- # notification_count=0 00:16:16.599 11:12:27 -- host/discovery.sh@75 -- # notify_id=2 00:16:16.599 11:12:27 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:16.599 11:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.599 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:16.599 [2024-12-06 11:12:27.734916] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:16.599 [2024-12-06 11:12:27.735001] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:16.599 11:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.599 11:12:27 -- host/discovery.sh@127 -- # sleep 1 00:16:16.599 [2024-12-06 11:12:27.740911] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:16.599 [2024-12-06 11:12:27.740961] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:16.599 [2024-12-06 11:12:27.741061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.599 [2024-12-06 11:12:27.741105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.599 [2024-12-06 11:12:27.741116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.599 [2024-12-06 11:12:27.741140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.599 [2024-12-06 11:12:27.741165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.599 [2024-12-06 11:12:27.741190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.599 [2024-12-06 11:12:27.741207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:16.599 [2024-12-06 11:12:27.741216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.599 [2024-12-06 11:12:27.741225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213a1f0 is same with the state(5) to be set 00:16:17.976 11:12:28 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:16:17.976 11:12:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.976 11:12:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.976 11:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 11:12:28 -- host/discovery.sh@59 -- # xargs 00:16:17.976 11:12:28 -- host/discovery.sh@59 -- # sort 00:16:17.976 11:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@129 -- # get_bdev_list 00:16:17.976 11:12:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.976 11:12:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.976 11:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 11:12:28 -- host/discovery.sh@55 -- # sort 00:16:17.976 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 11:12:28 -- host/discovery.sh@55 -- # xargs 00:16:17.976 11:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:17.976 11:12:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.976 11:12:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.976 11:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 11:12:28 -- host/discovery.sh@63 -- # sort -n 00:16:17.976 11:12:28 -- host/discovery.sh@63 -- # xargs 00:16:17.976 11:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@131 -- # get_notification_count 00:16:17.976 11:12:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.976 11:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 11:12:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.976 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 11:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@74 -- # notification_count=0 00:16:17.976 11:12:28 -- host/discovery.sh@75 -- # notify_id=2 00:16:17.976 11:12:28 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:17.976 11:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.976 11:12:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.976 11:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.976 11:12:28 -- host/discovery.sh@135 -- # sleep 1 00:16:18.914 11:12:29 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:18.914 11:12:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.914 11:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.914 11:12:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.914 11:12:29 -- host/discovery.sh@59 -- # sort 00:16:18.914 11:12:29 -- common/autotest_common.sh@10 -- # set +x 00:16:18.914 11:12:29 -- host/discovery.sh@59 -- # xargs 00:16:18.914 11:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.914 11:12:30 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:18.914 11:12:30 -- host/discovery.sh@137 -- # get_bdev_list 00:16:18.914 11:12:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.914 11:12:30 -- host/discovery.sh@55 -- # sort 00:16:18.914 11:12:30 -- host/discovery.sh@55 -- # xargs 00:16:18.914 11:12:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.914 11:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.914 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.914 11:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.172 11:12:30 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:19.172 11:12:30 -- host/discovery.sh@138 -- # get_notification_count 00:16:19.172 11:12:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:19.172 11:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.172 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:16:19.172 11:12:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.172 11:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.172 11:12:30 -- host/discovery.sh@74 -- # notification_count=2 00:16:19.172 11:12:30 -- host/discovery.sh@75 -- # notify_id=4 00:16:19.172 11:12:30 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:19.172 11:12:30 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.172 11:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.172 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:16:20.108 [2024-12-06 11:12:31.154635] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:20.108 [2024-12-06 11:12:31.154867] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:20.108 [2024-12-06 11:12:31.154930] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:20.108 [2024-12-06 11:12:31.160677] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:20.108 [2024-12-06 11:12:31.219934] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:20.108 [2024-12-06 11:12:31.219972] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:20.108 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.108 11:12:31 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.108 11:12:31 -- common/autotest_common.sh@650 -- # local es=0 00:16:20.108 11:12:31 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.108 11:12:31 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:20.108 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.108 11:12:31 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:20.108 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.108 11:12:31 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.108 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.108 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.108 request: 00:16:20.108 { 00:16:20.108 "name": "nvme", 00:16:20.108 "trtype": "tcp", 00:16:20.108 "traddr": "10.0.0.2", 00:16:20.108 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:20.108 "adrfam": "ipv4", 00:16:20.108 "trsvcid": "8009", 00:16:20.108 "wait_for_attach": true, 00:16:20.108 "method": "bdev_nvme_start_discovery", 00:16:20.108 "req_id": 1 00:16:20.108 } 00:16:20.108 Got JSON-RPC error response 00:16:20.108 response: 00:16:20.108 { 00:16:20.108 "code": -17, 00:16:20.108 "message": "File exists" 00:16:20.108 } 00:16:20.108 11:12:31 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:20.108 11:12:31 -- common/autotest_common.sh@653 -- # es=1 00:16:20.108 11:12:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.108 11:12:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.108 11:12:31 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.108 11:12:31 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:20.108 11:12:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:20.108 11:12:31 -- host/discovery.sh@67 -- # sort 00:16:20.108 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.108 11:12:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:20.108 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.108 11:12:31 -- host/discovery.sh@67 -- # xargs 00:16:20.367 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:20.367 11:12:31 -- host/discovery.sh@147 -- # get_bdev_list 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.367 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.367 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # xargs 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # sort 00:16:20.367 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.367 11:12:31 -- common/autotest_common.sh@650 -- # local es=0 00:16:20.367 11:12:31 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.367 11:12:31 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.367 11:12:31 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.367 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.367 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.367 request: 00:16:20.367 { 00:16:20.367 "name": "nvme_second", 00:16:20.367 "trtype": "tcp", 00:16:20.367 "traddr": "10.0.0.2", 00:16:20.367 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:20.367 "adrfam": "ipv4", 00:16:20.367 "trsvcid": "8009", 00:16:20.367 "wait_for_attach": true, 00:16:20.367 "method": "bdev_nvme_start_discovery", 00:16:20.367 "req_id": 1 00:16:20.367 } 00:16:20.367 Got JSON-RPC error response 00:16:20.367 response: 00:16:20.367 { 00:16:20.367 "code": -17, 00:16:20.367 "message": "File exists" 00:16:20.367 } 00:16:20.367 11:12:31 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:20.367 11:12:31 -- common/autotest_common.sh@653 -- # es=1 00:16:20.367 11:12:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.367 11:12:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.367 11:12:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.367 11:12:31 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:20.367 11:12:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:16:20.367 11:12:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:20.367 11:12:31 -- host/discovery.sh@67 -- # sort 00:16:20.367 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.367 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.367 11:12:31 -- host/discovery.sh@67 -- # xargs 00:16:20.367 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:20.367 11:12:31 -- host/discovery.sh@153 -- # get_bdev_list 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # sort 00:16:20.367 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.367 11:12:31 -- host/discovery.sh@55 -- # xargs 00:16:20.367 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.367 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:20.367 11:12:31 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.367 11:12:31 -- common/autotest_common.sh@650 -- # local es=0 00:16:20.367 11:12:31 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.367 11:12:31 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:20.367 11:12:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.367 11:12:31 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.367 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.367 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:21.352 [2024-12-06 11:12:32.489965] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.352 [2024-12-06 11:12:32.490327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.352 [2024-12-06 11:12:32.490381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.352 [2024-12-06 11:12:32.490399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d35c0 with addr=10.0.0.2, port=8010 00:16:21.352 [2024-12-06 11:12:32.490424] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:21.352 [2024-12-06 11:12:32.490435] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:21.352 [2024-12-06 11:12:32.490445] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:22.728 [2024-12-06 11:12:33.489944] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:22.728 [2024-12-06 11:12:33.490266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:22.728 [2024-12-06 11:12:33.490360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:22.728 [2024-12-06 
11:12:33.490475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195bc0 with addr=10.0.0.2, port=8010 00:16:22.728 [2024-12-06 11:12:33.490627] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:22.728 [2024-12-06 11:12:33.490643] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:22.728 [2024-12-06 11:12:33.490654] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:23.666 [2024-12-06 11:12:34.489808] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:23.666 request: 00:16:23.666 { 00:16:23.666 "name": "nvme_second", 00:16:23.666 "trtype": "tcp", 00:16:23.666 "traddr": "10.0.0.2", 00:16:23.666 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:23.666 "adrfam": "ipv4", 00:16:23.666 "trsvcid": "8010", 00:16:23.666 "attach_timeout_ms": 3000, 00:16:23.666 "method": "bdev_nvme_start_discovery", 00:16:23.666 "req_id": 1 00:16:23.666 } 00:16:23.666 Got JSON-RPC error response 00:16:23.666 response: 00:16:23.666 { 00:16:23.666 "code": -110, 00:16:23.666 "message": "Connection timed out" 00:16:23.666 } 00:16:23.666 11:12:34 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:23.666 11:12:34 -- common/autotest_common.sh@653 -- # es=1 00:16:23.666 11:12:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.666 11:12:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.666 11:12:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.666 11:12:34 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:23.666 11:12:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:23.666 11:12:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:23.666 11:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.666 11:12:34 -- host/discovery.sh@67 -- # sort 00:16:23.666 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.666 11:12:34 -- host/discovery.sh@67 -- # xargs 00:16:23.666 11:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.666 11:12:34 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:23.666 11:12:34 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:23.666 11:12:34 -- host/discovery.sh@162 -- # kill 82552 00:16:23.666 11:12:34 -- host/discovery.sh@163 -- # nvmftestfini 00:16:23.666 11:12:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:23.666 11:12:34 -- nvmf/common.sh@116 -- # sync 00:16:23.666 11:12:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:23.666 11:12:34 -- nvmf/common.sh@119 -- # set +e 00:16:23.666 11:12:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:23.666 11:12:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:23.666 rmmod nvme_tcp 00:16:23.666 rmmod nvme_fabrics 00:16:23.666 rmmod nvme_keyring 00:16:23.666 11:12:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:23.666 11:12:34 -- nvmf/common.sh@123 -- # set -e 00:16:23.666 11:12:34 -- nvmf/common.sh@124 -- # return 0 00:16:23.666 11:12:34 -- nvmf/common.sh@477 -- # '[' -n 82533 ']' 00:16:23.666 11:12:34 -- nvmf/common.sh@478 -- # killprocess 82533 00:16:23.666 11:12:34 -- common/autotest_common.sh@936 -- # '[' -z 82533 ']' 00:16:23.666 11:12:34 -- common/autotest_common.sh@940 -- # kill -0 82533 00:16:23.666 11:12:34 -- common/autotest_common.sh@941 -- # uname 00:16:23.666 11:12:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.666 11:12:34 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82533 00:16:23.666 killing process with pid 82533 00:16:23.666 11:12:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:23.666 11:12:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:23.666 11:12:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82533' 00:16:23.666 11:12:34 -- common/autotest_common.sh@955 -- # kill 82533 00:16:23.666 11:12:34 -- common/autotest_common.sh@960 -- # wait 82533 00:16:23.925 11:12:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:23.925 11:12:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:23.925 11:12:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:23.925 11:12:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.925 11:12:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:23.925 11:12:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.925 11:12:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.925 11:12:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.925 11:12:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:23.925 00:16:23.925 real 0m12.461s 00:16:23.925 user 0m24.175s 00:16:23.925 sys 0m2.099s 00:16:23.925 11:12:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:23.925 ************************************ 00:16:23.925 END TEST nvmf_discovery 00:16:23.925 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.925 ************************************ 00:16:23.925 11:12:34 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:23.925 11:12:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:23.925 11:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.925 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.925 ************************************ 00:16:23.925 START TEST nvmf_discovery_remove_ifc 00:16:23.925 ************************************ 00:16:23.925 11:12:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:23.925 * Looking for test storage... 
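For reference, the -110 error that closed the nvmf_discovery run above is the test's deliberate negative check: the host asks for a discovery controller on port 8010, where nothing listens, and the 3000 ms attach timeout fires. A minimal stand-alone sketch of that check is below; the parameter values are the ones printed in the JSON-RPC request above, rpc_cmd is the autotest helper wrapping rpc.py, and the --attach-timeout-ms flag spelling plus the if/else framing are assumptions, not the script's literal code.

    # Hedged sketch: request a discovery controller on a dead port and
    # expect the RPC to fail with code -110 (Connection timed out).
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
           -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
           -q nqn.2021-12.io.spdk:test --attach-timeout-ms 3000; then
        echo "unexpected: discovery to port 8010 succeeded" >&2
        exit 1
    fi
    echo "discovery attach timed out as expected"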
00:16:23.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.925 11:12:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:23.925 11:12:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:23.925 11:12:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:24.183 11:12:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:24.183 11:12:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:24.183 11:12:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:24.183 11:12:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:24.183 11:12:35 -- scripts/common.sh@335 -- # IFS=.-: 00:16:24.183 11:12:35 -- scripts/common.sh@335 -- # read -ra ver1 00:16:24.183 11:12:35 -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.183 11:12:35 -- scripts/common.sh@336 -- # read -ra ver2 00:16:24.183 11:12:35 -- scripts/common.sh@337 -- # local 'op=<' 00:16:24.183 11:12:35 -- scripts/common.sh@339 -- # ver1_l=2 00:16:24.183 11:12:35 -- scripts/common.sh@340 -- # ver2_l=1 00:16:24.183 11:12:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:24.183 11:12:35 -- scripts/common.sh@343 -- # case "$op" in 00:16:24.183 11:12:35 -- scripts/common.sh@344 -- # : 1 00:16:24.183 11:12:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:24.183 11:12:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.183 11:12:35 -- scripts/common.sh@364 -- # decimal 1 00:16:24.183 11:12:35 -- scripts/common.sh@352 -- # local d=1 00:16:24.183 11:12:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.183 11:12:35 -- scripts/common.sh@354 -- # echo 1 00:16:24.183 11:12:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:24.183 11:12:35 -- scripts/common.sh@365 -- # decimal 2 00:16:24.183 11:12:35 -- scripts/common.sh@352 -- # local d=2 00:16:24.183 11:12:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.183 11:12:35 -- scripts/common.sh@354 -- # echo 2 00:16:24.183 11:12:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:24.183 11:12:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:24.183 11:12:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:24.183 11:12:35 -- scripts/common.sh@367 -- # return 0 00:16:24.183 11:12:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.183 11:12:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:24.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.183 --rc genhtml_branch_coverage=1 00:16:24.183 --rc genhtml_function_coverage=1 00:16:24.183 --rc genhtml_legend=1 00:16:24.183 --rc geninfo_all_blocks=1 00:16:24.183 --rc geninfo_unexecuted_blocks=1 00:16:24.183 00:16:24.183 ' 00:16:24.183 11:12:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:24.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.183 --rc genhtml_branch_coverage=1 00:16:24.183 --rc genhtml_function_coverage=1 00:16:24.183 --rc genhtml_legend=1 00:16:24.183 --rc geninfo_all_blocks=1 00:16:24.183 --rc geninfo_unexecuted_blocks=1 00:16:24.183 00:16:24.183 ' 00:16:24.183 11:12:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:24.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.183 --rc genhtml_branch_coverage=1 00:16:24.183 --rc genhtml_function_coverage=1 00:16:24.183 --rc genhtml_legend=1 00:16:24.183 --rc geninfo_all_blocks=1 00:16:24.183 --rc geninfo_unexecuted_blocks=1 00:16:24.183 00:16:24.183 ' 00:16:24.183 
11:12:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:24.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.183 --rc genhtml_branch_coverage=1 00:16:24.183 --rc genhtml_function_coverage=1 00:16:24.183 --rc genhtml_legend=1 00:16:24.183 --rc geninfo_all_blocks=1 00:16:24.183 --rc geninfo_unexecuted_blocks=1 00:16:24.183 00:16:24.183 ' 00:16:24.183 11:12:35 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.183 11:12:35 -- nvmf/common.sh@7 -- # uname -s 00:16:24.183 11:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.183 11:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.183 11:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.183 11:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.183 11:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.183 11:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.183 11:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.183 11:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.183 11:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.183 11:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.183 11:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:24.183 11:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:24.183 11:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.183 11:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.183 11:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.183 11:12:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.183 11:12:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.183 11:12:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.183 11:12:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.183 11:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.183 11:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.183 11:12:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.183 11:12:35 -- paths/export.sh@5 -- # export PATH 00:16:24.184 11:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.184 11:12:35 -- nvmf/common.sh@46 -- # : 0 00:16:24.184 11:12:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:24.184 11:12:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:24.184 11:12:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:24.184 11:12:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.184 11:12:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.184 11:12:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:24.184 11:12:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:24.184 11:12:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:24.184 11:12:35 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:24.184 11:12:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:24.184 11:12:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.184 11:12:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:24.184 11:12:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:24.184 11:12:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:24.184 11:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.184 11:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.184 11:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.184 11:12:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:24.184 11:12:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:24.184 11:12:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:24.184 11:12:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:24.184 11:12:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:24.184 11:12:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:24.184 11:12:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.184 11:12:35 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.184 11:12:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.184 11:12:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:24.184 11:12:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.184 11:12:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.184 11:12:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.184 11:12:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.184 11:12:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.184 11:12:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.184 11:12:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.184 11:12:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.184 11:12:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:24.184 11:12:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:24.184 Cannot find device "nvmf_tgt_br" 00:16:24.184 11:12:35 -- nvmf/common.sh@154 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.184 Cannot find device "nvmf_tgt_br2" 00:16:24.184 11:12:35 -- nvmf/common.sh@155 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:24.184 11:12:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:24.184 Cannot find device "nvmf_tgt_br" 00:16:24.184 11:12:35 -- nvmf/common.sh@157 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:24.184 Cannot find device "nvmf_tgt_br2" 00:16:24.184 11:12:35 -- nvmf/common.sh@158 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:24.184 11:12:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:24.184 11:12:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.184 11:12:35 -- nvmf/common.sh@161 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.184 11:12:35 -- nvmf/common.sh@162 -- # true 00:16:24.184 11:12:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.184 11:12:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.184 11:12:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.184 11:12:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.184 11:12:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.184 11:12:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.441 11:12:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.441 11:12:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.441 11:12:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.441 11:12:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:24.441 11:12:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:24.441 11:12:35 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:24.441 11:12:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:24.441 11:12:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.441 11:12:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.441 11:12:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.441 11:12:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:24.441 11:12:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:24.441 11:12:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.441 11:12:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.441 11:12:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.441 11:12:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.441 11:12:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.441 11:12:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:24.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:24.441 00:16:24.441 --- 10.0.0.2 ping statistics --- 00:16:24.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.441 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:24.441 11:12:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:24.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:24.441 00:16:24.441 --- 10.0.0.3 ping statistics --- 00:16:24.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.441 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:24.441 11:12:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:24.441 00:16:24.441 --- 10.0.0.1 ping statistics --- 00:16:24.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.441 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:24.441 11:12:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.441 11:12:35 -- nvmf/common.sh@421 -- # return 0 00:16:24.441 11:12:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.441 11:12:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.441 11:12:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.441 11:12:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.441 11:12:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.441 11:12:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.441 11:12:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:24.441 11:12:35 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:24.441 11:12:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:24.441 11:12:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.441 11:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:24.441 11:12:35 -- nvmf/common.sh@469 -- # nvmfpid=83045 00:16:24.441 11:12:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:24.442 11:12:35 -- nvmf/common.sh@470 -- # waitforlisten 83045 00:16:24.442 11:12:35 -- common/autotest_common.sh@829 -- # '[' -z 83045 ']' 00:16:24.442 11:12:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.442 11:12:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.442 11:12:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.442 11:12:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.442 11:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:24.442 [2024-12-06 11:12:35.533323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:24.442 [2024-12-06 11:12:35.533417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.698 [2024-12-06 11:12:35.665182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.698 [2024-12-06 11:12:35.698130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.698 [2024-12-06 11:12:35.698284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.698 [2024-12-06 11:12:35.698297] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.698 [2024-12-06 11:12:35.698305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
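The nvmf_veth_init sequence traced above builds the whole test topology before the target starts. Condensed to only the commands visible in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is set up the same way and omitted here), it amounts to:

    # Veth pairs bridged on the host, target end moved into a private netns,
    # as in nvmf/common.sh@165..@206 above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator must reach the namespaced target before the test proceeds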
00:16:24.698 [2024-12-06 11:12:35.698335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.632 11:12:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.632 11:12:36 -- common/autotest_common.sh@862 -- # return 0 00:16:25.632 11:12:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:25.632 11:12:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.632 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:25.632 11:12:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.632 11:12:36 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:25.632 11:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.632 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:25.632 [2024-12-06 11:12:36.554002] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.632 [2024-12-06 11:12:36.562085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:25.632 null0 00:16:25.632 [2024-12-06 11:12:36.594037] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.632 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:25.632 11:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.632 11:12:36 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83078 00:16:25.632 11:12:36 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:25.632 11:12:36 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83078 /tmp/host.sock 00:16:25.632 11:12:36 -- common/autotest_common.sh@829 -- # '[' -z 83078 ']' 00:16:25.632 11:12:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:25.632 11:12:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.632 11:12:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:25.632 11:12:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.632 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:25.632 [2024-12-06 11:12:36.657492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
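The rpc_cmd call at host/discovery_remove_ifc.sh@43 above only shows its side effects in this trace: TCP transport init, listeners on 10.0.0.2 ports 8009 and 4420, and a null0 bdev. A plausible equivalent RPC sequence is sketched below as individual rpc_cmd calls against the namespaced target; aside from the addresses, ports, serial, and NQNs already printed in the log, every value (null bdev size, subsystem flags) is an assumption, not the script's literal body.

    # Assumed setup behind the tcp.c listen notices above.
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009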
00:16:25.632 [2024-12-06 11:12:36.657810] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83078 ] 00:16:25.891 [2024-12-06 11:12:36.794112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.891 [2024-12-06 11:12:36.834108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.891 [2024-12-06 11:12:36.834591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.891 11:12:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.891 11:12:36 -- common/autotest_common.sh@862 -- # return 0 00:16:25.891 11:12:36 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.891 11:12:36 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:25.891 11:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.891 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:25.891 11:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.891 11:12:36 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:25.891 11:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.891 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:25.891 11:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.891 11:12:36 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:25.891 11:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.891 11:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.269 [2024-12-06 11:12:37.976972] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:27.269 [2024-12-06 11:12:37.977239] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:27.269 [2024-12-06 11:12:37.977271] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:27.269 [2024-12-06 11:12:37.983021] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:27.269 [2024-12-06 11:12:38.039151] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:27.269 [2024-12-06 11:12:38.039395] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:27.269 [2024-12-06 11:12:38.039469] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:27.269 [2024-12-06 11:12:38.039667] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:27.269 [2024-12-06 11:12:38.039748] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:27.269 11:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.269 [2024-12-06 
11:12:38.045385] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14692c0 was disconnected and freed. delete nvme_qpair. 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.269 11:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.269 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.269 11:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.269 11:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.269 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.270 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:27.270 11:12:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.270 11:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.270 11:12:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:27.270 11:12:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.233 11:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.233 11:12:39 -- common/autotest_common.sh@10 -- # set +x 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.233 11:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.233 11:12:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.172 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.172 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.172 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.172 11:12:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.550 11:12:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.550 11:12:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
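The repeated get_bdev_list blocks that follow are the wait_for_bdev helper polling the host app once per second. Stripped of the xtrace noise it is essentially the loop below; the bdev_get_bdevs | jq | sort | xargs pipeline and the one-second sleep are taken from the trace, while the function framing is an illustrative reconstruction rather than the script's exact text.

    # Poll the host-side app until the bdev list matches the expectation
    # (an empty name waits for the list to drain), mirroring
    # host/discovery_remove_ifc.sh@29..@34 above.
    wait_for_bdev() {
        local expected=$1 bdevs
        while :; do
            bdevs=$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ $bdevs == "$expected" ]] && break
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # discovery attach must have created nvme0n1
    wait_for_bdev ''        # after the target interface drops, it must disappear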
00:16:30.550 11:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.550 11:12:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.550 11:12:41 -- common/autotest_common.sh@10 -- # set +x 00:16:30.550 11:12:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.550 11:12:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.551 11:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.551 11:12:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:30.551 11:12:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.489 11:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.489 11:12:42 -- common/autotest_common.sh@10 -- # set +x 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.489 11:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:31.489 11:12:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.427 11:12:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.427 11:12:43 -- common/autotest_common.sh@10 -- # set +x 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.427 11:12:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.427 11:12:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.427 [2024-12-06 11:12:43.467297] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:32.427 [2024-12-06 11:12:43.467375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.427 [2024-12-06 11:12:43.467392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.427 [2024-12-06 11:12:43.467405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.427 [2024-12-06 11:12:43.467414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.427 [2024-12-06 11:12:43.467424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.427 [2024-12-06 11:12:43.467434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.427 [2024-12-06 11:12:43.467444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.427 [2024-12-06 11:12:43.467453] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.427 [2024-12-06 11:12:43.467463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.427 [2024-12-06 11:12:43.467473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.427 [2024-12-06 11:12:43.467482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142d6c0 is same with the state(5) to be set 00:16:32.427 [2024-12-06 11:12:43.477292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142d6c0 (9): Bad file descriptor 00:16:32.427 [2024-12-06 11:12:43.487310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:33.364 11:12:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.364 11:12:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.364 11:12:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.364 11:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.364 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:16:33.364 11:12:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.364 11:12:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.364 [2024-12-06 11:12:44.491621] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:34.739 [2024-12-06 11:12:45.515660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:35.678 [2024-12-06 11:12:46.538693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:35.678 [2024-12-06 11:12:46.539082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x142d6c0 with addr=10.0.0.2, port=4420 00:16:35.678 [2024-12-06 11:12:46.539135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142d6c0 is same with the state(5) to be set 00:16:35.678 [2024-12-06 11:12:46.539190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:35.678 [2024-12-06 11:12:46.539213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:35.678 [2024-12-06 11:12:46.539268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:35.678 [2024-12-06 11:12:46.539291] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:35.678 [2024-12-06 11:12:46.540121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142d6c0 (9): Bad file descriptor 00:16:35.678 [2024-12-06 11:12:46.540186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:35.678 [2024-12-06 11:12:46.540236] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:35.678 [2024-12-06 11:12:46.540304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.678 [2024-12-06 11:12:46.540335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.678 [2024-12-06 11:12:46.540362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.678 [2024-12-06 11:12:46.540383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.678 [2024-12-06 11:12:46.540405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.678 [2024-12-06 11:12:46.540426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.678 [2024-12-06 11:12:46.540449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.678 [2024-12-06 11:12:46.540469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.678 [2024-12-06 11:12:46.540493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.678 [2024-12-06 11:12:46.540513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.678 [2024-12-06 11:12:46.540534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:35.678 [2024-12-06 11:12:46.540593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142dad0 (9): Bad file descriptor 00:16:35.678 [2024-12-06 11:12:46.541213] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:35.678 [2024-12-06 11:12:46.541254] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:35.678 11:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.678 11:12:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.678 11:12:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.616 11:12:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.616 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.616 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.617 11:12:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.617 11:12:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.617 11:12:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.617 11:12:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.617 11:12:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.617 11:12:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:36.617 11:12:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.567 [2024-12-06 11:12:48.552467] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:37.567 [2024-12-06 11:12:48.552727] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:37.567 [2024-12-06 11:12:48.552808] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:37.567 [2024-12-06 11:12:48.558500] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:37.567 [2024-12-06 11:12:48.613675] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:37.567 [2024-12-06 11:12:48.613717] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:37.567 [2024-12-06 11:12:48.613739] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:37.567 [2024-12-06 11:12:48.613753] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:37.567 [2024-12-06 11:12:48.613778] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:37.567 [2024-12-06 11:12:48.620872] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x143a930 was disconnected and freed. delete nvme_qpair. 00:16:37.567 11:12:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.567 11:12:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.567 11:12:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.567 11:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.567 11:12:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.567 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:16:37.567 11:12:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.827 11:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.827 11:12:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:37.827 11:12:48 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:37.827 11:12:48 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83078 00:16:37.827 11:12:48 -- common/autotest_common.sh@936 -- # '[' -z 83078 ']' 00:16:37.827 11:12:48 -- common/autotest_common.sh@940 -- # kill -0 83078 00:16:37.827 11:12:48 -- common/autotest_common.sh@941 -- # uname 00:16:37.827 11:12:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.827 11:12:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83078 00:16:37.827 killing process with pid 83078 00:16:37.827 11:12:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.827 11:12:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.827 11:12:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83078' 00:16:37.827 11:12:48 -- common/autotest_common.sh@955 -- # kill 83078 00:16:37.827 11:12:48 -- common/autotest_common.sh@960 -- # wait 83078 00:16:37.827 11:12:48 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:37.827 11:12:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.827 11:12:48 -- nvmf/common.sh@116 -- # sync 00:16:38.087 11:12:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:38.087 11:12:49 -- nvmf/common.sh@119 -- # set +e 00:16:38.087 11:12:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:38.087 11:12:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:38.087 rmmod nvme_tcp 00:16:38.087 rmmod nvme_fabrics 00:16:38.087 rmmod nvme_keyring 00:16:38.087 11:12:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:38.087 11:12:49 -- nvmf/common.sh@123 -- # set -e 00:16:38.087 11:12:49 -- nvmf/common.sh@124 -- # return 0 00:16:38.087 11:12:49 -- nvmf/common.sh@477 -- # '[' -n 83045 ']' 00:16:38.087 11:12:49 -- nvmf/common.sh@478 -- # killprocess 83045 00:16:38.087 11:12:49 -- common/autotest_common.sh@936 -- # '[' -z 83045 ']' 00:16:38.087 11:12:49 -- common/autotest_common.sh@940 -- # kill -0 83045 00:16:38.087 11:12:49 -- common/autotest_common.sh@941 -- # uname 00:16:38.087 11:12:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.087 11:12:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83045 00:16:38.087 killing process with pid 83045 00:16:38.087 11:12:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:38.087 11:12:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
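Taken together, the test body exercised above reduces to a down/up cycle on the target's veth interface plus two bdev waits: dropping the address severs the controller (the errno 110 cascade seen earlier), restoring it lets discovery re-attach as nvme1n1. The ip commands below are the ones traced at @75/@76 and @82/@83; the surrounding framing, using the wait_for_bdev sketch shown earlier, is a condensed restatement.

    # Remove the listener address and drop the link: the initiator's
    # controller fails and nvme0n1 goes away ...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''
    # ... then restore it: discovery re-attaches and creates nvme1n1.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1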
00:16:38.087 11:12:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83045' 00:16:38.087 11:12:49 -- common/autotest_common.sh@955 -- # kill 83045 00:16:38.087 11:12:49 -- common/autotest_common.sh@960 -- # wait 83045 00:16:38.346 11:12:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:38.346 11:12:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:38.346 11:12:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:38.346 11:12:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.346 11:12:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:38.346 11:12:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.346 11:12:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.346 11:12:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.346 11:12:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:38.346 00:16:38.346 real 0m14.375s 00:16:38.346 user 0m22.603s 00:16:38.346 sys 0m2.367s 00:16:38.346 11:12:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:38.346 ************************************ 00:16:38.346 END TEST nvmf_discovery_remove_ifc 00:16:38.346 ************************************ 00:16:38.346 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.346 11:12:49 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:38.346 11:12:49 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:38.346 11:12:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.346 11:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.346 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.346 ************************************ 00:16:38.346 START TEST nvmf_digest 00:16:38.346 ************************************ 00:16:38.346 11:12:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:38.346 * Looking for test storage... 00:16:38.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:38.346 11:12:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:38.346 11:12:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:38.346 11:12:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:38.606 11:12:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:38.606 11:12:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:38.606 11:12:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:38.606 11:12:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:38.606 11:12:49 -- scripts/common.sh@335 -- # IFS=.-: 00:16:38.606 11:12:49 -- scripts/common.sh@335 -- # read -ra ver1 00:16:38.606 11:12:49 -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.606 11:12:49 -- scripts/common.sh@336 -- # read -ra ver2 00:16:38.606 11:12:49 -- scripts/common.sh@337 -- # local 'op=<' 00:16:38.606 11:12:49 -- scripts/common.sh@339 -- # ver1_l=2 00:16:38.606 11:12:49 -- scripts/common.sh@340 -- # ver2_l=1 00:16:38.606 11:12:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:38.606 11:12:49 -- scripts/common.sh@343 -- # case "$op" in 00:16:38.606 11:12:49 -- scripts/common.sh@344 -- # : 1 00:16:38.606 11:12:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:38.606 11:12:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.606 11:12:49 -- scripts/common.sh@364 -- # decimal 1 00:16:38.606 11:12:49 -- scripts/common.sh@352 -- # local d=1 00:16:38.606 11:12:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.606 11:12:49 -- scripts/common.sh@354 -- # echo 1 00:16:38.606 11:12:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:38.606 11:12:49 -- scripts/common.sh@365 -- # decimal 2 00:16:38.606 11:12:49 -- scripts/common.sh@352 -- # local d=2 00:16:38.606 11:12:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.606 11:12:49 -- scripts/common.sh@354 -- # echo 2 00:16:38.606 11:12:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:38.606 11:12:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:38.606 11:12:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:38.606 11:12:49 -- scripts/common.sh@367 -- # return 0 00:16:38.606 11:12:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.606 11:12:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.606 --rc genhtml_branch_coverage=1 00:16:38.606 --rc genhtml_function_coverage=1 00:16:38.606 --rc genhtml_legend=1 00:16:38.606 --rc geninfo_all_blocks=1 00:16:38.606 --rc geninfo_unexecuted_blocks=1 00:16:38.606 00:16:38.606 ' 00:16:38.606 11:12:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.606 --rc genhtml_branch_coverage=1 00:16:38.606 --rc genhtml_function_coverage=1 00:16:38.606 --rc genhtml_legend=1 00:16:38.606 --rc geninfo_all_blocks=1 00:16:38.606 --rc geninfo_unexecuted_blocks=1 00:16:38.606 00:16:38.606 ' 00:16:38.606 11:12:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.606 --rc genhtml_branch_coverage=1 00:16:38.606 --rc genhtml_function_coverage=1 00:16:38.606 --rc genhtml_legend=1 00:16:38.606 --rc geninfo_all_blocks=1 00:16:38.606 --rc geninfo_unexecuted_blocks=1 00:16:38.606 00:16:38.606 ' 00:16:38.606 11:12:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.606 --rc genhtml_branch_coverage=1 00:16:38.606 --rc genhtml_function_coverage=1 00:16:38.606 --rc genhtml_legend=1 00:16:38.606 --rc geninfo_all_blocks=1 00:16:38.606 --rc geninfo_unexecuted_blocks=1 00:16:38.606 00:16:38.606 ' 00:16:38.606 11:12:49 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.606 11:12:49 -- nvmf/common.sh@7 -- # uname -s 00:16:38.606 11:12:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.606 11:12:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.606 11:12:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.606 11:12:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.606 11:12:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.606 11:12:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.606 11:12:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.606 11:12:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.606 11:12:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.606 11:12:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:38.606 
11:12:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:16:38.606 11:12:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.606 11:12:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.606 11:12:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.606 11:12:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.606 11:12:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.606 11:12:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.606 11:12:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.606 11:12:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.606 11:12:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.606 11:12:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.606 11:12:49 -- paths/export.sh@5 -- # export PATH 00:16:38.606 11:12:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.606 11:12:49 -- nvmf/common.sh@46 -- # : 0 00:16:38.606 11:12:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:38.606 11:12:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:38.606 11:12:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:38.606 11:12:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.606 11:12:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.606 11:12:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
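As in the previous test, nvmf/common.sh derives the host identity from nvme-cli before the digest run starts. A stand-alone equivalent of the @17..@19 lines above is sketched below; generating the NQN and the NVME_HOST array match the trace, while the suffix-stripping used to obtain the host ID is an assumption that happens to be consistent with the UUID printed in the log.

    # Generate a host NQN with nvme-cli and reuse its UUID suffix as host ID,
    # as at nvmf/common.sh@17/@18 above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed derivation of the UUID part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")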
00:16:38.606 11:12:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:38.606 11:12:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:38.606 11:12:49 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:38.606 11:12:49 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:38.606 11:12:49 -- host/digest.sh@16 -- # runtime=2 00:16:38.606 11:12:49 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:38.606 11:12:49 -- host/digest.sh@132 -- # nvmftestinit 00:16:38.606 11:12:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:38.606 11:12:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.606 11:12:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:38.606 11:12:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:38.606 11:12:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:38.606 11:12:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.606 11:12:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.606 11:12:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.606 11:12:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:38.606 11:12:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:38.606 11:12:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.606 11:12:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.606 11:12:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.606 11:12:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:38.606 11:12:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.606 11:12:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.606 11:12:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.606 11:12:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.606 11:12:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.606 11:12:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.606 11:12:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.606 11:12:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.606 11:12:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:38.606 11:12:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:38.606 Cannot find device "nvmf_tgt_br" 00:16:38.606 11:12:49 -- nvmf/common.sh@154 -- # true 00:16:38.606 11:12:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.606 Cannot find device "nvmf_tgt_br2" 00:16:38.606 11:12:49 -- nvmf/common.sh@155 -- # true 00:16:38.607 11:12:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:38.607 11:12:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:38.607 Cannot find device "nvmf_tgt_br" 00:16:38.607 11:12:49 -- nvmf/common.sh@157 -- # true 00:16:38.607 11:12:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:38.607 Cannot find device "nvmf_tgt_br2" 00:16:38.607 11:12:49 -- nvmf/common.sh@158 -- # true 00:16:38.607 11:12:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:38.607 11:12:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:38.607 
11:12:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.607 11:12:49 -- nvmf/common.sh@161 -- # true 00:16:38.607 11:12:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.607 11:12:49 -- nvmf/common.sh@162 -- # true 00:16:38.607 11:12:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.607 11:12:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.607 11:12:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.607 11:12:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.607 11:12:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.607 11:12:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.866 11:12:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.866 11:12:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.866 11:12:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.866 11:12:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:38.866 11:12:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:38.866 11:12:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:38.866 11:12:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:38.866 11:12:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.866 11:12:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.866 11:12:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.866 11:12:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:38.866 11:12:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:38.866 11:12:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.866 11:12:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.866 11:12:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.866 11:12:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.866 11:12:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.866 11:12:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:38.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:38.866 00:16:38.866 --- 10.0.0.2 ping statistics --- 00:16:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.866 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:38.866 11:12:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:38.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:38.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:38.866 00:16:38.866 --- 10.0.0.3 ping statistics --- 00:16:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.866 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:38.866 11:12:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:38.866 00:16:38.866 --- 10.0.0.1 ping statistics --- 00:16:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.866 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:38.866 11:12:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.866 11:12:49 -- nvmf/common.sh@421 -- # return 0 00:16:38.866 11:12:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:38.866 11:12:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.866 11:12:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:38.866 11:12:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:38.866 11:12:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.866 11:12:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:38.866 11:12:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:38.866 11:12:49 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:38.866 11:12:49 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:38.866 11:12:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:38.866 11:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.866 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.866 ************************************ 00:16:38.866 START TEST nvmf_digest_clean 00:16:38.866 ************************************ 00:16:38.866 11:12:49 -- common/autotest_common.sh@1114 -- # run_digest 00:16:38.866 11:12:49 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:38.866 11:12:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:38.866 11:12:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.866 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.866 11:12:49 -- nvmf/common.sh@469 -- # nvmfpid=83494 00:16:38.866 11:12:49 -- nvmf/common.sh@470 -- # waitforlisten 83494 00:16:38.866 11:12:49 -- common/autotest_common.sh@829 -- # '[' -z 83494 ']' 00:16:38.866 11:12:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:38.866 11:12:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.866 11:12:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.866 11:12:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.866 11:12:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.866 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.866 [2024-12-06 11:12:49.969345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
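Condensed, the nvmf_veth_init sequence traced above builds the following topology (all names and addresses exactly as traced, offered only as a readable sketch, not a substitute for nvmf/common.sh): the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the three bridge-side veth peers are enslaved to nvmf_br so the ping checks above can pass.

# sketch of the veth/namespace topology assembled by nvmf_veth_init above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # initiator -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator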
00:16:38.866 [2024-12-06 11:12:49.969444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.125 [2024-12-06 11:12:50.107795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.125 [2024-12-06 11:12:50.145944] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.125 [2024-12-06 11:12:50.146115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.125 [2024-12-06 11:12:50.146130] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.125 [2024-12-06 11:12:50.146140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.125 [2024-12-06 11:12:50.146175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.125 11:12:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.125 11:12:50 -- common/autotest_common.sh@862 -- # return 0 00:16:39.125 11:12:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:39.125 11:12:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.125 11:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:39.125 11:12:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.125 11:12:50 -- host/digest.sh@120 -- # common_target_config 00:16:39.125 11:12:50 -- host/digest.sh@43 -- # rpc_cmd 00:16:39.125 11:12:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.125 11:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:39.385 null0 00:16:39.385 [2024-12-06 11:12:50.327596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.385 [2024-12-06 11:12:50.351723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.385 11:12:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.385 11:12:50 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:39.385 11:12:50 -- host/digest.sh@77 -- # local rw bs qd 00:16:39.385 11:12:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:39.385 11:12:50 -- host/digest.sh@80 -- # rw=randread 00:16:39.385 11:12:50 -- host/digest.sh@80 -- # bs=4096 00:16:39.385 11:12:50 -- host/digest.sh@80 -- # qd=128 00:16:39.385 11:12:50 -- host/digest.sh@82 -- # bperfpid=83518 00:16:39.385 11:12:50 -- host/digest.sh@83 -- # waitforlisten 83518 /var/tmp/bperf.sock 00:16:39.385 11:12:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:39.385 11:12:50 -- common/autotest_common.sh@829 -- # '[' -z 83518 ']' 00:16:39.385 11:12:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:39.385 11:12:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:39.385 11:12:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
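Each run_bperf iteration that follows repeats the same pattern, varying only the workload, IO size and queue depth (randread/randwrite at 4096 and 131072 bytes, queue depths 128 and 16). A minimal sketch of one iteration, with every path, flag and the jq filter taken from the trace (the harness additionally waits for the socket to appear before issuing RPCs):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock
# 1. start bdevperf idle, waiting for RPC configuration
"$bperf" -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# 2. finish init and attach the target subsystem with data digest enabled
"$rpc" -s "$sock" framework_start_init
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 3. run the 2-second job
"$bperf_py" -s "$sock" perform_tests
# 4. check which accel module actually computed the crc32c digests
"$rpc" -s "$sock" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# in this configuration the harness expects "software <count>" with a non-zero count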
00:16:39.385 11:12:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.385 11:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:39.385 [2024-12-06 11:12:50.408165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:39.385 [2024-12-06 11:12:50.408277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83518 ] 00:16:39.645 [2024-12-06 11:12:50.549676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.645 [2024-12-06 11:12:50.585992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.582 11:12:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.582 11:12:51 -- common/autotest_common.sh@862 -- # return 0 00:16:40.582 11:12:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:40.582 11:12:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:40.582 11:12:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:40.582 11:12:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.582 11:12:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:41.168 nvme0n1 00:16:41.168 11:12:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:41.168 11:12:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:41.168 Running I/O for 2 seconds... 
00:16:43.101 00:16:43.101 Latency(us) 00:16:43.101 [2024-12-06T11:12:54.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.101 [2024-12-06T11:12:54.248Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:43.101 nvme0n1 : 2.01 16207.84 63.31 0.00 0.00 7891.98 6821.70 18230.92 00:16:43.101 [2024-12-06T11:12:54.248Z] =================================================================================================================== 00:16:43.101 [2024-12-06T11:12:54.248Z] Total : 16207.84 63.31 0.00 0.00 7891.98 6821.70 18230.92 00:16:43.101 0 00:16:43.101 11:12:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:43.101 11:12:54 -- host/digest.sh@92 -- # get_accel_stats 00:16:43.101 11:12:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:43.101 11:12:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:43.101 11:12:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:43.101 | select(.opcode=="crc32c") 00:16:43.101 | "\(.module_name) \(.executed)"' 00:16:43.360 11:12:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:43.360 11:12:54 -- host/digest.sh@93 -- # exp_module=software 00:16:43.360 11:12:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:43.360 11:12:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.360 11:12:54 -- host/digest.sh@97 -- # killprocess 83518 00:16:43.360 11:12:54 -- common/autotest_common.sh@936 -- # '[' -z 83518 ']' 00:16:43.360 11:12:54 -- common/autotest_common.sh@940 -- # kill -0 83518 00:16:43.360 11:12:54 -- common/autotest_common.sh@941 -- # uname 00:16:43.360 11:12:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.360 11:12:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83518 00:16:43.618 11:12:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:43.618 killing process with pid 83518 00:16:43.618 11:12:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:43.618 11:12:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83518' 00:16:43.618 Received shutdown signal, test time was about 2.000000 seconds 00:16:43.618 00:16:43.618 Latency(us) 00:16:43.618 [2024-12-06T11:12:54.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.618 [2024-12-06T11:12:54.765Z] =================================================================================================================== 00:16:43.618 [2024-12-06T11:12:54.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.618 11:12:54 -- common/autotest_common.sh@955 -- # kill 83518 00:16:43.618 11:12:54 -- common/autotest_common.sh@960 -- # wait 83518 00:16:43.618 11:12:54 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:43.618 11:12:54 -- host/digest.sh@77 -- # local rw bs qd 00:16:43.618 11:12:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:43.618 11:12:54 -- host/digest.sh@80 -- # rw=randread 00:16:43.618 11:12:54 -- host/digest.sh@80 -- # bs=131072 00:16:43.618 11:12:54 -- host/digest.sh@80 -- # qd=16 00:16:43.618 11:12:54 -- host/digest.sh@82 -- # bperfpid=83583 00:16:43.618 11:12:54 -- host/digest.sh@83 -- # waitforlisten 83583 /var/tmp/bperf.sock 00:16:43.618 11:12:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:43.618 11:12:54 -- 
common/autotest_common.sh@829 -- # '[' -z 83583 ']' 00:16:43.618 11:12:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:43.618 11:12:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.618 11:12:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:43.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:43.618 11:12:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.619 11:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:43.619 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:43.619 Zero copy mechanism will not be used. 00:16:43.619 [2024-12-06 11:12:54.723168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:43.619 [2024-12-06 11:12:54.723308] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83583 ] 00:16:43.877 [2024-12-06 11:12:54.858502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.877 [2024-12-06 11:12:54.892839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.877 11:12:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.877 11:12:54 -- common/autotest_common.sh@862 -- # return 0 00:16:43.877 11:12:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:43.877 11:12:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:43.877 11:12:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:44.135 11:12:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.135 11:12:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.394 nvme0n1 00:16:44.651 11:12:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:44.651 11:12:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:44.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:44.651 Zero copy mechanism will not be used. 00:16:44.651 Running I/O for 2 seconds... 
00:16:46.554 00:16:46.554 Latency(us) 00:16:46.554 [2024-12-06T11:12:57.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.554 [2024-12-06T11:12:57.701Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:46.554 nvme0n1 : 2.00 8006.41 1000.80 0.00 0.00 1995.34 1765.00 3574.69 00:16:46.554 [2024-12-06T11:12:57.701Z] =================================================================================================================== 00:16:46.554 [2024-12-06T11:12:57.701Z] Total : 8006.41 1000.80 0.00 0.00 1995.34 1765.00 3574.69 00:16:46.554 0 00:16:46.554 11:12:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:46.554 11:12:57 -- host/digest.sh@92 -- # get_accel_stats 00:16:46.554 11:12:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:46.554 11:12:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:46.554 | select(.opcode=="crc32c") 00:16:46.554 | "\(.module_name) \(.executed)"' 00:16:46.554 11:12:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:46.813 11:12:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:46.813 11:12:57 -- host/digest.sh@93 -- # exp_module=software 00:16:46.813 11:12:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:46.813 11:12:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:46.813 11:12:57 -- host/digest.sh@97 -- # killprocess 83583 00:16:46.813 11:12:57 -- common/autotest_common.sh@936 -- # '[' -z 83583 ']' 00:16:46.813 11:12:57 -- common/autotest_common.sh@940 -- # kill -0 83583 00:16:46.813 11:12:57 -- common/autotest_common.sh@941 -- # uname 00:16:46.813 11:12:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.813 11:12:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83583 00:16:47.072 11:12:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:47.072 11:12:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:47.072 killing process with pid 83583 00:16:47.072 11:12:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83583' 00:16:47.072 Received shutdown signal, test time was about 2.000000 seconds 00:16:47.072 00:16:47.072 Latency(us) 00:16:47.073 [2024-12-06T11:12:58.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.073 [2024-12-06T11:12:58.220Z] =================================================================================================================== 00:16:47.073 [2024-12-06T11:12:58.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.073 11:12:57 -- common/autotest_common.sh@955 -- # kill 83583 00:16:47.073 11:12:57 -- common/autotest_common.sh@960 -- # wait 83583 00:16:47.073 11:12:58 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:47.073 11:12:58 -- host/digest.sh@77 -- # local rw bs qd 00:16:47.073 11:12:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:47.073 11:12:58 -- host/digest.sh@80 -- # rw=randwrite 00:16:47.073 11:12:58 -- host/digest.sh@80 -- # bs=4096 00:16:47.073 11:12:58 -- host/digest.sh@80 -- # qd=128 00:16:47.073 11:12:58 -- host/digest.sh@82 -- # bperfpid=83630 00:16:47.073 11:12:58 -- host/digest.sh@83 -- # waitforlisten 83630 /var/tmp/bperf.sock 00:16:47.073 11:12:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:47.073 11:12:58 -- 
common/autotest_common.sh@829 -- # '[' -z 83630 ']' 00:16:47.073 11:12:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:47.073 11:12:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:47.073 11:12:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:47.073 11:12:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.073 11:12:58 -- common/autotest_common.sh@10 -- # set +x 00:16:47.073 [2024-12-06 11:12:58.164810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:47.073 [2024-12-06 11:12:58.164924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83630 ] 00:16:47.332 [2024-12-06 11:12:58.304195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.332 [2024-12-06 11:12:58.337224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.332 11:12:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.332 11:12:58 -- common/autotest_common.sh@862 -- # return 0 00:16:47.332 11:12:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:47.332 11:12:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:47.332 11:12:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:47.591 11:12:58 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.591 11:12:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.160 nvme0n1 00:16:48.160 11:12:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:48.160 11:12:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:48.160 Running I/O for 2 seconds... 
00:16:50.068 00:16:50.068 Latency(us) 00:16:50.068 [2024-12-06T11:13:01.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.068 [2024-12-06T11:13:01.215Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.068 nvme0n1 : 2.00 17643.93 68.92 0.00 0.00 7248.10 5928.03 15728.64 00:16:50.068 [2024-12-06T11:13:01.215Z] =================================================================================================================== 00:16:50.068 [2024-12-06T11:13:01.215Z] Total : 17643.93 68.92 0.00 0.00 7248.10 5928.03 15728.64 00:16:50.068 0 00:16:50.068 11:13:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:50.068 11:13:01 -- host/digest.sh@92 -- # get_accel_stats 00:16:50.068 11:13:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:50.068 11:13:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:50.068 11:13:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:50.068 | select(.opcode=="crc32c") 00:16:50.068 | "\(.module_name) \(.executed)"' 00:16:50.637 11:13:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:50.637 11:13:01 -- host/digest.sh@93 -- # exp_module=software 00:16:50.637 11:13:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:50.637 11:13:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:50.637 11:13:01 -- host/digest.sh@97 -- # killprocess 83630 00:16:50.637 11:13:01 -- common/autotest_common.sh@936 -- # '[' -z 83630 ']' 00:16:50.637 11:13:01 -- common/autotest_common.sh@940 -- # kill -0 83630 00:16:50.637 11:13:01 -- common/autotest_common.sh@941 -- # uname 00:16:50.637 11:13:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.637 11:13:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83630 00:16:50.637 killing process with pid 83630 00:16:50.637 Received shutdown signal, test time was about 2.000000 seconds 00:16:50.637 00:16:50.637 Latency(us) 00:16:50.637 [2024-12-06T11:13:01.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.637 [2024-12-06T11:13:01.784Z] =================================================================================================================== 00:16:50.637 [2024-12-06T11:13:01.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.637 11:13:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:50.637 11:13:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:50.637 11:13:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83630' 00:16:50.637 11:13:01 -- common/autotest_common.sh@955 -- # kill 83630 00:16:50.637 11:13:01 -- common/autotest_common.sh@960 -- # wait 83630 00:16:50.637 11:13:01 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:50.637 11:13:01 -- host/digest.sh@77 -- # local rw bs qd 00:16:50.637 11:13:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:50.637 11:13:01 -- host/digest.sh@80 -- # rw=randwrite 00:16:50.637 11:13:01 -- host/digest.sh@80 -- # bs=131072 00:16:50.637 11:13:01 -- host/digest.sh@80 -- # qd=16 00:16:50.637 11:13:01 -- host/digest.sh@82 -- # bperfpid=83683 00:16:50.637 11:13:01 -- host/digest.sh@83 -- # waitforlisten 83683 /var/tmp/bperf.sock 00:16:50.637 11:13:01 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:50.637 11:13:01 -- 
common/autotest_common.sh@829 -- # '[' -z 83683 ']' 00:16:50.637 11:13:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:50.637 11:13:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.637 11:13:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:50.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:50.637 11:13:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.637 11:13:01 -- common/autotest_common.sh@10 -- # set +x 00:16:50.637 [2024-12-06 11:13:01.693493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:50.637 [2024-12-06 11:13:01.694338] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83683 ] 00:16:50.637 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:50.637 Zero copy mechanism will not be used. 00:16:50.896 [2024-12-06 11:13:01.831343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.896 [2024-12-06 11:13:01.864666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.896 11:13:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.896 11:13:01 -- common/autotest_common.sh@862 -- # return 0 00:16:50.896 11:13:01 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:50.896 11:13:01 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:50.896 11:13:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:51.156 11:13:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.156 11:13:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.728 nvme0n1 00:16:51.728 11:13:02 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:51.728 11:13:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:51.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.728 Zero copy mechanism will not be used. 00:16:51.728 Running I/O for 2 seconds... 
00:16:53.636 00:16:53.636 Latency(us) 00:16:53.636 [2024-12-06T11:13:04.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.636 [2024-12-06T11:13:04.783Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:53.636 nvme0n1 : 2.00 6878.27 859.78 0.00 0.00 2320.93 1601.16 3872.58 00:16:53.636 [2024-12-06T11:13:04.783Z] =================================================================================================================== 00:16:53.636 [2024-12-06T11:13:04.783Z] Total : 6878.27 859.78 0.00 0.00 2320.93 1601.16 3872.58 00:16:53.636 0 00:16:53.636 11:13:04 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:53.636 11:13:04 -- host/digest.sh@92 -- # get_accel_stats 00:16:53.636 11:13:04 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:53.636 11:13:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:53.636 11:13:04 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:53.636 | select(.opcode=="crc32c") 00:16:53.636 | "\(.module_name) \(.executed)"' 00:16:53.895 11:13:04 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:53.895 11:13:04 -- host/digest.sh@93 -- # exp_module=software 00:16:53.895 11:13:04 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:53.895 11:13:04 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:53.895 11:13:04 -- host/digest.sh@97 -- # killprocess 83683 00:16:53.895 11:13:04 -- common/autotest_common.sh@936 -- # '[' -z 83683 ']' 00:16:53.895 11:13:04 -- common/autotest_common.sh@940 -- # kill -0 83683 00:16:53.895 11:13:04 -- common/autotest_common.sh@941 -- # uname 00:16:53.895 11:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.895 11:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83683 00:16:53.895 killing process with pid 83683 00:16:53.895 Received shutdown signal, test time was about 2.000000 seconds 00:16:53.895 00:16:53.895 Latency(us) 00:16:53.895 [2024-12-06T11:13:05.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.895 [2024-12-06T11:13:05.042Z] =================================================================================================================== 00:16:53.895 [2024-12-06T11:13:05.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.895 11:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.895 11:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.895 11:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83683' 00:16:53.895 11:13:04 -- common/autotest_common.sh@955 -- # kill 83683 00:16:53.895 11:13:04 -- common/autotest_common.sh@960 -- # wait 83683 00:16:54.152 11:13:05 -- host/digest.sh@126 -- # killprocess 83494 00:16:54.152 11:13:05 -- common/autotest_common.sh@936 -- # '[' -z 83494 ']' 00:16:54.152 11:13:05 -- common/autotest_common.sh@940 -- # kill -0 83494 00:16:54.152 11:13:05 -- common/autotest_common.sh@941 -- # uname 00:16:54.152 11:13:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.152 11:13:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83494 00:16:54.152 killing process with pid 83494 00:16:54.152 11:13:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.152 11:13:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.152 11:13:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83494' 00:16:54.152 
11:13:05 -- common/autotest_common.sh@955 -- # kill 83494 00:16:54.152 11:13:05 -- common/autotest_common.sh@960 -- # wait 83494 00:16:54.409 00:16:54.409 real 0m15.392s 00:16:54.409 user 0m30.022s 00:16:54.409 sys 0m4.418s 00:16:54.409 11:13:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:54.409 ************************************ 00:16:54.409 END TEST nvmf_digest_clean 00:16:54.409 ************************************ 00:16:54.409 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.409 11:13:05 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:54.409 11:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:54.409 11:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.409 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.409 ************************************ 00:16:54.410 START TEST nvmf_digest_error 00:16:54.410 ************************************ 00:16:54.410 11:13:05 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:54.410 11:13:05 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:54.410 11:13:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:54.410 11:13:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.410 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.410 11:13:05 -- nvmf/common.sh@469 -- # nvmfpid=83757 00:16:54.410 11:13:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:54.410 11:13:05 -- nvmf/common.sh@470 -- # waitforlisten 83757 00:16:54.410 11:13:05 -- common/autotest_common.sh@829 -- # '[' -z 83757 ']' 00:16:54.410 11:13:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.410 11:13:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.410 11:13:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.410 11:13:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.410 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.410 [2024-12-06 11:13:05.414065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:54.410 [2024-12-06 11:13:05.414339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.667 [2024-12-06 11:13:05.556573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.667 [2024-12-06 11:13:05.587928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.667 [2024-12-06 11:13:05.588329] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.667 [2024-12-06 11:13:05.588387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.667 [2024-12-06 11:13:05.588604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:54.667 [2024-12-06 11:13:05.588855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.667 11:13:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.667 11:13:05 -- common/autotest_common.sh@862 -- # return 0 00:16:54.667 11:13:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:54.667 11:13:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.667 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.667 11:13:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.667 11:13:05 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:54.667 11:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.667 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.667 [2024-12-06 11:13:05.709250] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:54.667 11:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.667 11:13:05 -- host/digest.sh@104 -- # common_target_config 00:16:54.667 11:13:05 -- host/digest.sh@43 -- # rpc_cmd 00:16:54.667 11:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.667 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.667 null0 00:16:54.667 [2024-12-06 11:13:05.777241] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.667 [2024-12-06 11:13:05.801348] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.667 11:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.667 11:13:05 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:54.667 11:13:05 -- host/digest.sh@54 -- # local rw bs qd 00:16:54.667 11:13:05 -- host/digest.sh@56 -- # rw=randread 00:16:54.667 11:13:05 -- host/digest.sh@56 -- # bs=4096 00:16:54.667 11:13:05 -- host/digest.sh@56 -- # qd=128 00:16:54.668 11:13:05 -- host/digest.sh@58 -- # bperfpid=83782 00:16:54.668 11:13:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:54.668 11:13:05 -- host/digest.sh@60 -- # waitforlisten 83782 /var/tmp/bperf.sock 00:16:54.668 11:13:05 -- common/autotest_common.sh@829 -- # '[' -z 83782 ']' 00:16:54.668 11:13:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:54.668 11:13:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.668 11:13:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:54.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:54.668 11:13:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.668 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 [2024-12-06 11:13:05.856727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:54.926 [2024-12-06 11:13:05.857016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83782 ] 00:16:54.926 [2024-12-06 11:13:05.995231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.926 [2024-12-06 11:13:06.034713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.195 11:13:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.195 11:13:06 -- common/autotest_common.sh@862 -- # return 0 00:16:55.195 11:13:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:55.195 11:13:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:55.472 11:13:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:55.472 11:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.472 11:13:06 -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 11:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.473 11:13:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.473 11:13:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.732 nvme0n1 00:16:55.732 11:13:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:55.732 11:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.732 11:13:06 -- common/autotest_common.sh@10 -- # set +x 00:16:55.732 11:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.732 11:13:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:55.732 11:13:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.992 Running I/O for 2 seconds... 
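The error variant flips the same machinery into a negative test: crc32c was assigned to the accel error module (accel_assign_opc above), and once injection is switched to corrupt, the digests computed for the --ddgst connection stop matching, which is why the completions below all report data digest and transient transport errors. Condensed from the harness helpers as traced (rpc_cmd is the autotest helper driving the nvmf target application started above, bperf_rpc/bperf_py drive bdevperf on /var/tmp/bperf.sock); a sketch of the sequence, not standalone code:

# route crc32c to the error-injecting accel module on the target
rpc_cmd accel_assign_opc -o crc32c -m error
# configure bdevperf: keep error stats, retry forever, attach with data digest on
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable        # start with injection off
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 # switch to corrupt (flags as traced)
bperf_py perform_tests                                       # expect the digest errors logged below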
00:16:55.992 [2024-12-06 11:13:06.909677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.909744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.909766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.924680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.924717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.924746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.939404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.939442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.939472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.954156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.954193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.954222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.969437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.969493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.969524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.985122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:06.985159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:06.985188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:06.999938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.000145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:07.000163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:07.014829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.015024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:07.015041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:07.030154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.030193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:07.030222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:07.047178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.047217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:07.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:07.064229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.064428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.992 [2024-12-06 11:13:07.064446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.992 [2024-12-06 11:13:07.082658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.992 [2024-12-06 11:13:07.082708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.993 [2024-12-06 11:13:07.082776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.993 [2024-12-06 11:13:07.101685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.993 [2024-12-06 11:13:07.101790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.993 [2024-12-06 11:13:07.101820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.993 [2024-12-06 11:13:07.120939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:55.993 [2024-12-06 11:13:07.121000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.993 [2024-12-06 11:13:07.121022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.139601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.139671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.139724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.157033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.157074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.157119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.173505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.173572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.173603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.189080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.189118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.189146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.203819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.203855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.203884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.218584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.218620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.218649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.234584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.234621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.234651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.249528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.249592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.249623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.264339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.264592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.264613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.280032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.280261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.280391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.297540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.297803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.297968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.314325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.314546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.314675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.330743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.330956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.331092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.346726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.346923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 
[2024-12-06 11:13:07.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.363001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.363214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.363383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.379930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.253 [2024-12-06 11:13:07.380182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.253 [2024-12-06 11:13:07.380318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.253 [2024-12-06 11:13:07.397395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.254 [2024-12-06 11:13:07.397650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.397779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.414567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.414805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.414951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.430728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.430935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.431077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.446694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.446907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.447030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.462555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.462764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.462788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.478109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.478297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.478314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.494600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.494635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.494665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.509521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.509601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.509631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.524525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.524749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.524767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.539870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.540058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.540075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.555027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.555212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.555230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.570352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.570524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:7855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.570573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.585530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.585611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.585643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.601725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.601781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.601811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.617483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.617565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.632446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.632649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.632667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.513 [2024-12-06 11:13:07.647733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.513 [2024-12-06 11:13:07.647905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.513 [2024-12-06 11:13:07.647923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.773 [2024-12-06 11:13:07.664070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.773 [2024-12-06 11:13:07.664106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.773 [2024-12-06 11:13:07.664135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.773 [2024-12-06 11:13:07.679058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.773 [2024-12-06 11:13:07.679094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.773 [2024-12-06 11:13:07.679123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.773 [2024-12-06 11:13:07.694077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.773 [2024-12-06 11:13:07.694113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.773 [2024-12-06 11:13:07.694142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.773 [2024-12-06 11:13:07.709033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.773 [2024-12-06 11:13:07.709069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.773 [2024-12-06 11:13:07.709097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.723893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.723929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.723958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.738772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.738979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.738996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.755167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.755203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.755232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.770537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.770613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.770627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.787688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 
00:16:56.774 [2024-12-06 11:13:07.787849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.787883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.805761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.805809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.805840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.822240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.822278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.822307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.837127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.837163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.837193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.852538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.852783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.852803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.870456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.870499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.870530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.887142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.887179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.887209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:56.774 [2024-12-06 11:13:07.902007] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:56.774 [2024-12-06 11:13:07.902044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.774 [2024-12-06 11:13:07.902073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.033 [2024-12-06 11:13:07.924674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.033 [2024-12-06 11:13:07.924712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:07.924744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:07.939924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:07.940116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:07.940134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:07.954966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:07.955155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:07.955173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:07.969974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:07.970162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:07.970180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:07.985038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:07.985228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:07.985245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.001245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.001295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.001325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.018102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.018154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.018184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.033185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.033221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.033251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.048197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.048389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.048406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.063446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.063672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.063690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.078424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.078462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.078491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.093419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.093456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.108345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.108382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.108413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.123113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.123150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.123179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.137934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.137971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.138001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.152813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.152852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.152880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.034 [2024-12-06 11:13:08.168676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.034 [2024-12-06 11:13:08.168876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.034 [2024-12-06 11:13:08.168895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.185938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.185979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.186010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.203990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.204244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.204262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.222926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.222985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.223023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.241541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.241607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.241639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.260702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.260783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.260817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.278907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.278966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.278998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.294992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.295031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.295061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.310778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.310816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.310845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.325907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.325944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.325973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.340768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.340809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 
[2024-12-06 11:13:08.340839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.355614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.355838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.370717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.370904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.370922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.385851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.386040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.386056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.400831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.401006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.401023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.416472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.416678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.416712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.295 [2024-12-06 11:13:08.433496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.295 [2024-12-06 11:13:08.433581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.295 [2024-12-06 11:13:08.433613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.451048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.451102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14035 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.451132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.467210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.467271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.467303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.483073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.483111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.483141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.498914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.498968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.498997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.515844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.516038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.516056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.533694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.533873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.533890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.549610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.549647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.549677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.564432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.564656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.564674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.579396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.579434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.579463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.594262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.594298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.594328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.609122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.609159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.555 [2024-12-06 11:13:08.609188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.555 [2024-12-06 11:13:08.624080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.555 [2024-12-06 11:13:08.624118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.556 [2024-12-06 11:13:08.624148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.556 [2024-12-06 11:13:08.639128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.556 [2024-12-06 11:13:08.639330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.556 [2024-12-06 11:13:08.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.556 [2024-12-06 11:13:08.654328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.556 [2024-12-06 11:13:08.654518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.556 [2024-12-06 11:13:08.654548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.556 [2024-12-06 11:13:08.669592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.556 [2024-12-06 11:13:08.669628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.556 [2024-12-06 11:13:08.669657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.556 [2024-12-06 11:13:08.684529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.556 [2024-12-06 11:13:08.684754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.556 [2024-12-06 11:13:08.684772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.815 [2024-12-06 11:13:08.700325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.815 [2024-12-06 11:13:08.700487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.815 [2024-12-06 11:13:08.700506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.815 [2024-12-06 11:13:08.717484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.815 [2024-12-06 11:13:08.717523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.815 [2024-12-06 11:13:08.717564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.733689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.733878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.733896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.748829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.748895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.763488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.763527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.763586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.778192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 
[2024-12-06 11:13:08.778227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.778256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.794344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.794566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.794585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.811747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.811944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.811963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.828284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.828320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.828350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.843665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.843701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.843746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.858493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.858530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.858588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.874256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8540b0) 00:16:57.816 [2024-12-06 11:13:08.874295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.816 [2024-12-06 11:13:08.874324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:57.816 [2024-12-06 11:13:08.892048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8540b0)
00:16:57.816 [2024-12-06 11:13:08.892272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:57.816 [2024-12-06 11:13:08.892291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:57.816
00:16:57.816 Latency(us)
00:16:57.816 [2024-12-06T11:13:08.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:57.816 [2024-12-06T11:13:08.963Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:16:57.816 nvme0n1 : 2.01 15878.87 62.03 0.00 0.00 8054.53 7089.80 30265.72
00:16:57.816 [2024-12-06T11:13:08.963Z] ===================================================================================================================
00:16:57.816 [2024-12-06T11:13:08.963Z] Total : 15878.87 62.03 0.00 0.00 8054.53 7089.80 30265.72
00:16:57.816 0
00:16:57.816 11:13:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:16:57.816 11:13:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:16:57.816 11:13:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:16:57.816 11:13:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:57.816 | .driver_specific
00:16:57.816 | .nvme_error
00:16:57.816 | .status_code
00:16:57.816 | .command_transient_transport_error'
00:16:58.076 11:13:09 -- host/digest.sh@71 -- # (( 125 > 0 ))
00:16:58.076 11:13:09 -- host/digest.sh@73 -- # killprocess 83782
00:16:58.076 11:13:09 -- common/autotest_common.sh@936 -- # '[' -z 83782 ']'
00:16:58.076 11:13:09 -- common/autotest_common.sh@940 -- # kill -0 83782
00:16:58.076 11:13:09 -- common/autotest_common.sh@941 -- # uname
00:16:58.076 11:13:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:58.076 11:13:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83782
00:16:58.076 killing process with pid 83782 Received shutdown signal, test time was about 2.000000 seconds
00:16:58.076
00:16:58.076 Latency(us)
00:16:58.076 [2024-12-06T11:13:09.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.076 [2024-12-06T11:13:09.223Z] ===================================================================================================================
00:16:58.076 [2024-12-06T11:13:09.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:58.076 11:13:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:58.076 11:13:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:58.076 11:13:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83782'
00:16:58.076 11:13:09 -- common/autotest_common.sh@955 -- # kill 83782
00:16:58.076 11:13:09 -- common/autotest_common.sh@960 -- # wait 83782
00:16:58.335 11:13:09 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:16:58.335 11:13:09 -- host/digest.sh@54 -- # local rw bs qd
00:16:58.335 11:13:09 -- host/digest.sh@56 -- # rw=randread
00:16:58.335 11:13:09 -- host/digest.sh@56 -- # bs=131072
00:16:58.335 11:13:09 -- host/digest.sh@56 -- # qd=16
00:16:58.335 11:13:09 -- host/digest.sh@58 -- # bperfpid=83829
00:16:58.335 11:13:09 -- host/digest.sh@60 -- # waitforlisten 83829 /var/tmp/bperf.sock
00:16:58.335 11:13:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:16:58.335 11:13:09 -- common/autotest_common.sh@829 -- # '[' -z 83829 ']'
00:16:58.335 11:13:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:16:58.335 11:13:09 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:58.335 11:13:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:16:58.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:16:58.335 11:13:09 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:58.335 11:13:09 -- common/autotest_common.sh@10 -- # set +x
00:16:58.335 [2024-12-06 11:13:09.396609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:58.335 [2024-12-06 11:13:09.396922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83829 ]
00:16:58.335 I/O size of 131072 is greater than zero copy threshold (65536).
00:16:58.335 Zero copy mechanism will not be used.
00:16:58.335 [2024-12-06 11:13:09.536530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:58.594 [2024-12-06 11:13:09.570376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:58.594 11:13:09 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:58.594 11:13:09 -- common/autotest_common.sh@862 -- # return 0
00:16:58.594 11:13:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:58.594 11:13:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:58.854 11:13:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:58.854 11:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.854 11:13:09 -- common/autotest_common.sh@10 -- # set +x
00:16:58.854 11:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.854 11:13:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:58.854 11:13:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:59.113 nvme0n1
00:16:59.113 11:13:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:16:59.113 11:13:10 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.113 11:13:10 -- common/autotest_common.sh@10 -- # set +x
00:16:59.113 11:13:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.113 11:13:10 -- host/digest.sh@69 -- # bperf_py perform_tests
00:16:59.113 11:13:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:16:59.375 I/O size of 131072 is greater than zero copy threshold (65536).
00:16:59.375 Zero copy mechanism will not be used.
00:16:59.375 Running I/O for 2 seconds...
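The trace above is, in effect, the recipe digest.sh follows for each pass: start bdevperf against /var/tmp/bperf.sock, enable NVMe error counters (bdev_nvme_set_options --nvme-error-stat), attach the controller with data digest enabled (--ddgst), arm CRC32C corruption with accel_error_inject_error, run I/O, and then require a non-zero transient-transport-error count. A minimal sketch of that final check, reusing the RPC socket, bdev name, and jq filter printed in this run (the surrounding shell is illustrative, not lifted from digest.sh):

  # Read the per-bdev NVMe error counters from the running bdevperf instance.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The injected CRC32C corruption must surface as COMMAND TRANSIENT TRANSPORT ERROR completions.
  (( errcount > 0 )) && echo "data digest errors observed: $errcount"

The same counter is what the '(( 125 > 0 ))' evaluation above checked for the preceding 4096-byte randread pass.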
00:16:59.375 [2024-12-06 11:13:10.337219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.337288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.337318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.341811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.341852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.341897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.345864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.345901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.345930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.349921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.349959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.349989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.354031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.354068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.354098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.358039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.358077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.358106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.361972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.362008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.362037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.366028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.366065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.366094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.370102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.370139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.370167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.374215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.374252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.374280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.378195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.378232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.382223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.382261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.382289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.386226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.386264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.386293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.390247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.390284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.390313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.394338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.394377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.394405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.398250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.398287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.398316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.402538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.402600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.402612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.406636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.406679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.406692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.410823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.410863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.410892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.414895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.414933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.414961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.418903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.418940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:59.375 [2024-12-06 11:13:10.418968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.422807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.422843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.422871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.427029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.427116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.375 [2024-12-06 11:13:10.427140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.375 [2024-12-06 11:13:10.431993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.375 [2024-12-06 11:13:10.432209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.432243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.436625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.436676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.436704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.440953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.440992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.441021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.445580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.445646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.445678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.450099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.450138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.450167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.455352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.455582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.455610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.460228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.460436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.460605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.465306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.465520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.465757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.470315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.470520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.470748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.474942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.475138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.475315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.480019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.480241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.480441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.484904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.485115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.485272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.489500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.489735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.489953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.494399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.494635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.494839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.499190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.499411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.499564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.504001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.504079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.504104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.508352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.508391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.508422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.512609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.376 [2024-12-06 11:13:10.512646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.512675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.376 [2024-12-06 11:13:10.517329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:16:59.376 [2024-12-06 11:13:10.517403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-12-06 11:13:10.517427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.521811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.521850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.521879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.526223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.526418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.526454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.530995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.531034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.531063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.535108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.535146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.535175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.539401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.539443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.539458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.543961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.544000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.544030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.548124] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.548162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.638 [2024-12-06 11:13:10.548191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.638 [2024-12-06 11:13:10.552303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.638 [2024-12-06 11:13:10.552341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.552370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.556844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.556883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.556913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.561098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.561136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.561165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.565268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.565306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.565335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.569913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.569953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.569982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.574084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.574276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.574310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.578497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.578560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.578575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.583100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.583140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.583170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.587308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.587353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.591551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.591649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.591678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.596370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.596442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.601102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.601154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.601185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.605514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.605577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.605593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.609667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.609704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.609733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.613701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.613738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.613766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.617801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.617837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.617866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.621718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.621754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.621782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.625726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.625763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.625792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.629699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.629735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.629763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.633856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.633893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.633923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.638383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.638421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.638450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.643308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.643351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.643367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.648231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.648422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.648456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.653024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.653065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.653109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.657428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.657466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.661703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.661763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.639 [2024-12-06 11:13:10.661799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.665961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.639 [2024-12-06 11:13:10.666000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:59.639 [2024-12-06 11:13:10.666029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.639 [2024-12-06 11:13:10.670456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.670493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.670522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.674889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.674928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.674957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.679129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.679167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.679195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.683437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.683493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.683508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.687794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.687839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.687867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.691782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.691997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.692032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.696061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.696102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.696131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.700014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.700051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.700080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.704021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.704066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.704094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.708157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.708201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.708230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.712261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.712304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.716261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.716297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.716326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.720241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.720278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.720306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.724340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.724389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.724417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.728408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.728456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.728486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.732487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.732531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.732589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.736471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.736512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.736542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.740533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.740616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.740631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.744587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.744637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.744666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.748557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.748624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.748654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.752821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:16:59.640 [2024-12-06 11:13:10.752860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.752873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.757060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.757099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.757112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.761524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.761636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.640 [2024-12-06 11:13:10.761653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.640 [2024-12-06 11:13:10.765931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.640 [2024-12-06 11:13:10.765987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.641 [2024-12-06 11:13:10.766016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.641 [2024-12-06 11:13:10.770354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.641 [2024-12-06 11:13:10.770398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.641 [2024-12-06 11:13:10.770427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.641 [2024-12-06 11:13:10.774791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.641 [2024-12-06 11:13:10.774833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.641 [2024-12-06 11:13:10.774862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.641 [2024-12-06 11:13:10.779355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.641 [2024-12-06 11:13:10.779398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.641 [2024-12-06 11:13:10.779413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.783946] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.783984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.784013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.788334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.788373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.788402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.792791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.792837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.792865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.797022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.797066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.797095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.801149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.801189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.801218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.805694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.805769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.805804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.809901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.809938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.809966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.813930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.813966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.813995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.818188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.818254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.822278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.822315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.822344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.826416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.826452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.826481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.831097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.831135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.831164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.835625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.835677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.835691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.840080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.840118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.840147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.844633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.844699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.849495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.849676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.849695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.854340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.854380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.854410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.859435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.859481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.859496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.864698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.864807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.864834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.870188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.870258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.875881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.875926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.875942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.880684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.880742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.880759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.885280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.885319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.885349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.890441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.890524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.890618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.895372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.895416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.903 [2024-12-06 11:13:10.895432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.903 [2024-12-06 11:13:10.901151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.903 [2024-12-06 11:13:10.901220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.901270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.907049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.907137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.907166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.911878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.911935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:59.904 [2024-12-06 11:13:10.911965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.917373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.917452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.917478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.922172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.922212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.922241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.926919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.926964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.926979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.931629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.931666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.936331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.936370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.936399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.940893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.940933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.940963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.945228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.945265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.949684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.949720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.949749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.953760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.953795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.953825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.957844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.957881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.957909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.961903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.961940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.961969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.966044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.966082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.966111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.970148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.970215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.970239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.974439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.974479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.978588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.978624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.978654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.982619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.982684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.986783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.986822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.986851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.990762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.990799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.990828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.994702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.994753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.994783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:10.998665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:10.998701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:10.998745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:11.002681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:16:59.904 [2024-12-06 11:13:11.002717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:11.002761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:11.006813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:11.006850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:11.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:11.011039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:11.011077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:11.011106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:11.015203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:11.015261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.904 [2024-12-06 11:13:11.015291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.904 [2024-12-06 11:13:11.019693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.904 [2024-12-06 11:13:11.019745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.019758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.023806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.023842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.023855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.027819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.027871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.027884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.031857] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.031892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.031922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.035731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.035765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.035793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.039710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.039773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.905 [2024-12-06 11:13:11.043990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:16:59.905 [2024-12-06 11:13:11.044040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.905 [2024-12-06 11:13:11.044069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.048252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.048290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.052574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.052624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.052655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.056813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.056862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.060742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.060777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.060790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.064575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.064619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.064633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.068425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.068461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.068489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.072406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.072442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.072470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.076349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.076384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.076413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.080225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.080261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.080289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.084126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.084161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.084190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.087949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.087983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.166 [2024-12-06 11:13:11.088012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.166 [2024-12-06 11:13:11.091760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.166 [2024-12-06 11:13:11.091795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.091822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.095645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.095678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.095706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.099520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.099600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.099630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.103534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.103610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.103625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.107550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.107653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.107682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.111721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.111755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.111783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.116020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.116057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.120885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.120983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.121015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.125503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.125566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.129794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.129830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.129843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.134121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.134163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.134191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.138252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.138295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.138324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.142409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.142446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:00.167 [2024-12-06 11:13:11.142475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.146450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.146487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.146516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.150413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.150448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.150476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.154525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.154626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.154641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.158551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.158641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.162476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.162511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.162539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.166426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.166462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.166490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.171094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.171183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.171214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.175506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.175599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.175631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.179737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.179776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.179804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.183887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.183925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.183954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.188037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.188077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.188106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.192100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.192138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.192167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.196236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.196271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.196300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.200426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.200465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.200494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.204462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.204500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.204529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.208478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.208516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.208544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.167 [2024-12-06 11:13:11.212417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.167 [2024-12-06 11:13:11.212610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.167 [2024-12-06 11:13:11.212643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.216660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.216696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.216724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.220655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.220690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.220718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.224609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.224641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.224653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.228535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:17:00.168 [2024-12-06 11:13:11.228751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.228785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.232833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.232871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.232900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.236780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.236815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.236843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.240793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.240831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.240860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.244872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.244907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.244935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.248845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.248881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.248909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.252851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.252887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.252916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.256887] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.256921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.256949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.260850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.260886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.260914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.264689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.264722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.264751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.268712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.268746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.268774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.272731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.272766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.272779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.276707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.276741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.276770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.280687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.280721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.280749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.284672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.284750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.288635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.288670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.288697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.292544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.292588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.292616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.296525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.296743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.296776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.300798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.300834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.300862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.168 [2024-12-06 11:13:11.304749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.168 [2024-12-06 11:13:11.304784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.168 [2024-12-06 11:13:11.304813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.309132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.309168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.309199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.313285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.313321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.313349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.317500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.317565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.317595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.321502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.321560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.321574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.325528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.325573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.325602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.329484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.329519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.329547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.333523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.333566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.333595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.337452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.337487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.337515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.341495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.341531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.341568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.345421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.345456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.349377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.349413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.349440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.353435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.353470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.353499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.357475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.357510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.357538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.361430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.361466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.361494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.365509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.365572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:00.429 [2024-12-06 11:13:11.365601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.369566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.369629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.369658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.373691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.373727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.373756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.377968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.378005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.378034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.382234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.382269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.382297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.386255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.386291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.386319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.390217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.390251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.429 [2024-12-06 11:13:11.390279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.429 [2024-12-06 11:13:11.394266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.429 [2024-12-06 11:13:11.394302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.394331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.398191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.398225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.398254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.402213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.402248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.402276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.406381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.406432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.406460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.410467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.410502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.410529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.414516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.414577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.414607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.418508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.418568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.418596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.422410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.422444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.422473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.426339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.426373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.426401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.430291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.430326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.430354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.434285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.434322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.434350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.438310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.438345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.438373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.442334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.442371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.442400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.446442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.446478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.446506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.450496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:17:00.430 [2024-12-06 11:13:11.450532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.450588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.454423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.454457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.454485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.458463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.458499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.458527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.462426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.462461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.462489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.466467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.466502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.466529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.470465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.470500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.470529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.474426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.474462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.474489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.478451] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.478486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.478515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.482403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.482438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.482466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.486447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.486483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.486511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.490410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.490445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.490474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.494407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.494458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.494486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.498385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.498421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.498449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.502407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.502443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.502472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.506329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.506364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.506392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.430 [2024-12-06 11:13:11.510366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.430 [2024-12-06 11:13:11.510402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.430 [2024-12-06 11:13:11.510429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.514336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.514371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.518256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.518319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.522270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.522306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.522334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.526258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.526293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.526321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.530268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.530303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.530332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.534295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.534330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.534358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.538368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.538405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.538433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.542361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.542396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.542424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.546339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.546374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.546402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.550343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.550378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.550406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.554289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.554325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.554353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.558314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.558351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.558379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.562464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.562499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.562527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.566444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.566479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.566507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.431 [2024-12-06 11:13:11.570807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.431 [2024-12-06 11:13:11.570844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.431 [2024-12-06 11:13:11.570872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.575066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.575103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.575144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.579368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.579409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.579424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.583491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.583534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.583580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.587616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.587665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:00.692 [2024-12-06 11:13:11.587693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.591445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.591678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.591710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.595815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.595851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.595879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.599766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.599801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.599830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.604081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.604132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.604160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.608770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.608811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.608826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.613448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.613485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.613515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.617979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.618019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.618048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.622540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.622604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.622634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.627271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.627314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.627330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.631668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.631705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.631746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.636533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.636624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.640988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.641037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.641068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.645246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.645284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.645313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.649554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.649676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.649703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.653875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.653915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.653945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.658096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.658291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.658325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.662676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.662715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.662744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.666992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.667034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.667064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.671306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.671349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.692 [2024-12-06 11:13:11.671365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.692 [2024-12-06 11:13:11.675454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.692 [2024-12-06 11:13:11.675497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.675512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.679837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:17:00.693 [2024-12-06 11:13:11.679877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.679907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.683893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.683929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.683958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.688027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.688064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.688093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.693011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.693051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.693081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.697218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.697409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.697443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.701822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.701875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.701905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.706129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.706169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.706199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.710348] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.710386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.710415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.714553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.714589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.714617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.718719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.718758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.718787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.722757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.722795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.722824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.726825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.726863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.726893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.731156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.731195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.731224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.735413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.735455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.735470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.739670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.739706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.739735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.744084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.744125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.744155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.748262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.748299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.748329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.752405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.752442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.752471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.757085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.757125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.757155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.761330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.761367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.761396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.765741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.765778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.765792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.769830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.769867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.773836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.773874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.773887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.777922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.777972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.778000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.782049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.782086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.782115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.786139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.786175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.693 [2024-12-06 11:13:11.786205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.693 [2024-12-06 11:13:11.790365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.693 [2024-12-06 11:13:11.790401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.790430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.794430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.794465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.794494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.798501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.798741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.798776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.802778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.802815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.802845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.806806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.806842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.806887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.810865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.810902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.810931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.814844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.814879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.814907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.818868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.818905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.818934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.822929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.822965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.822995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.827039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.827075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.827104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.694 [2024-12-06 11:13:11.831807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.694 [2024-12-06 11:13:11.831849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.694 [2024-12-06 11:13:11.831894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.836615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.836695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.836741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.841031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.841069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.841098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.845380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.845417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.845445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.849765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.849802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.849831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.854085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.854122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.854151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.858394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.858432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.862873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.862914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.862929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.867436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.867478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.872140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.872182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.872196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.876643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.876681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.876694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.881207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.881396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.881431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.885780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.954 [2024-12-06 11:13:11.885821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.954 [2024-12-06 11:13:11.885835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.954 [2024-12-06 11:13:11.890084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.890121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.890150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.894272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.894309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.894337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.898437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.898473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.898501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.902512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.902593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.902608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.906613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.906648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.906677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.910793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.910830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.910859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.915080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.915128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.915157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.919720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.919759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.919773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.924083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.924121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.924150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.928695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.928750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.928766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.933356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.933395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.933424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.938614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.938657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.938687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.943385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.943454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.948431] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.948478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.948494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.953045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.953129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.953160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.957536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.957618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.957634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.962170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.962365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.962400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.966865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.966904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.966933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.971120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.971158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.971187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.975520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.975594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.975610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.979714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.979749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.979762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.983843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.983880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.983894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.988212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.988249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.988278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.992421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.992458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.992486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:11.996630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:11.996665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:11.996693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:12.001028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:12.001065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:12.001093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:12.005234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.955 [2024-12-06 11:13:12.005271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.955 [2024-12-06 11:13:12.005300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.955 [2024-12-06 11:13:12.009510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.009574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.009604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.013896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.013934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.013964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.018027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.018065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.018094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.022093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.022145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.022159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.026319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.026358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.026387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.030411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.030447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.030477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.034617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.034655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.034668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.038981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.039019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.039049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.043173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.043209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.043246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.047576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.047630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.047660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.051792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.051829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.051843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.055868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.055906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.055949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.060216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.060254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.060283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.064407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.064443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:00.956 [2024-12-06 11:13:12.064472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.068541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.068602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.068617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.072880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.072936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.072950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.077133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.077169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.077198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.081263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.081299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.081327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.085394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.085430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.085458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.089798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.089837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.089868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.956 [2024-12-06 11:13:12.094362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:00.956 [2024-12-06 11:13:12.094415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.956 [2024-12-06 11:13:12.094444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.216 [2024-12-06 11:13:12.099227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.216 [2024-12-06 11:13:12.099306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.216 [2024-12-06 11:13:12.099322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.216 [2024-12-06 11:13:12.103705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.216 [2024-12-06 11:13:12.103764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.216 [2024-12-06 11:13:12.103794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.108028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.108066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.108110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.112163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.112199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.112227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.116270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.116306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.116334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.120404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.120439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.120467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.124516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.124578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.124608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.128589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.128638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.128653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.132693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.132728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.132741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.136678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.136710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.140617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.140652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.140665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.144653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.144688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.144702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.148607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.148652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.148665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.152570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 
00:17:01.217 [2024-12-06 11:13:12.152619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.152649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.156672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.156707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.156720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.160735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.160790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.160804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.164772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.164807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.164820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.168764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.168800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.168812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.172787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.172823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.172836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.176823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.176858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.176871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.180873] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.180909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.180922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.184931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.185010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.189082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.189117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.189146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.193096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.193132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.193159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.197184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.197220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.197249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.201628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.201664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.201692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.205976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.206012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.206041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.210079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.210115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.210143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.217 [2024-12-06 11:13:12.214195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.217 [2024-12-06 11:13:12.214231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.217 [2024-12-06 11:13:12.214260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.218338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.218403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.222401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.222437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.222464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.226509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.226571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.226601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.230593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.230627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.230655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.234571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.234605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.234633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.238593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.238627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.238655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.242593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.242628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.242656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.246653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.246687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.246716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.250688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.250722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.250749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.254698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.254732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.254760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.258749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.258784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.258812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.262706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.262740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.262768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.266755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.266792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.266821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.270775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.270810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.270838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.274770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.274806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.274834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.278744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.278779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.278808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.282740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.282774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.282802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.286790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.286826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.286854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.290808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.290843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:01.218 [2024-12-06 11:13:12.290872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.294850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.294885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.294914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.298907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.298942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.302901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.302937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.302966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.306946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.306981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.307009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.311001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.311037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.311066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.315155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.315190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.218 [2024-12-06 11:13:12.315219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.218 [2024-12-06 11:13:12.319886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680) 00:17:01.218 [2024-12-06 11:13:12.319939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:01.218 [2024-12-06 11:13:12.319976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:01.218 [2024-12-06 11:13:12.324344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680)
00:17:01.218 [2024-12-06 11:13:12.324386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:01.218 [2024-12-06 11:13:12.324416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:01.219 [2024-12-06 11:13:12.328597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5680)
00:17:01.219 [2024-12-06 11:13:12.328635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:01.219 [2024-12-06 11:13:12.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:01.219
00:17:01.219 Latency(us)
00:17:01.219 [2024-12-06T11:13:12.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.219 [2024-12-06T11:13:12.366Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:01.219 nvme0n1 : 2.00 7291.32 911.42 0.00 0.00 2191.44 1690.53 6345.08
00:17:01.219 [2024-12-06T11:13:12.366Z] ===================================================================================================================
00:17:01.219 [2024-12-06T11:13:12.366Z] Total : 7291.32 911.42 0.00 0.00 2191.44 1690.53 6345.08
00:17:01.219 0
00:17:01.219 11:13:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:01.219 11:13:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:01.219 | .driver_specific
00:17:01.219 | .nvme_error
00:17:01.219 | .status_code
00:17:01.219 | .command_transient_transport_error'
00:17:01.219 11:13:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:01.219 11:13:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:01.787 11:13:12 -- host/digest.sh@71 -- # (( 470 > 0 ))
00:17:01.787 11:13:12 -- host/digest.sh@73 -- # killprocess 83829
00:17:01.787 11:13:12 -- common/autotest_common.sh@936 -- # '[' -z 83829 ']'
00:17:01.787 11:13:12 -- common/autotest_common.sh@940 -- # kill -0 83829
00:17:01.787 11:13:12 -- common/autotest_common.sh@941 -- # uname
00:17:01.787 11:13:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:01.787 11:13:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83829
00:17:01.787 killing process with pid 83829 Received shutdown signal, test time was about 2.000000 seconds
00:17:01.787
00:17:01.787 Latency(us)
00:17:01.787 [2024-12-06T11:13:12.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.787 [2024-12-06T11:13:12.934Z] ===================================================================================================================
00:17:01.787 [2024-12-06T11:13:12.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:01.787 11:13:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:01.787 11:13:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:01.787 11:13:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83829'
00:17:01.787 11:13:12 -- common/autotest_common.sh@955 -- # kill 83829
00:17:01.787 11:13:12 -- common/autotest_common.sh@960 -- # wait 83829
00:17:01.787 11:13:12 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:17:01.787 11:13:12 -- host/digest.sh@54 -- # local rw bs qd
00:17:01.787 11:13:12 -- host/digest.sh@56 -- # rw=randwrite
00:17:01.787 11:13:12 -- host/digest.sh@56 -- # bs=4096
00:17:01.787 11:13:12 -- host/digest.sh@56 -- # qd=128
00:17:01.787 11:13:12 -- host/digest.sh@58 -- # bperfpid=83876
00:17:01.787 11:13:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:01.787 11:13:12 -- host/digest.sh@60 -- # waitforlisten 83876 /var/tmp/bperf.sock
00:17:01.787 11:13:12 -- common/autotest_common.sh@829 -- # '[' -z 83876 ']'
00:17:01.787 11:13:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:01.787 11:13:12 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:01.787 11:13:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:01.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:01.787 11:13:12 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:01.787 11:13:12 -- common/autotest_common.sh@10 -- # set +x
00:17:01.787 [2024-12-06 11:13:12.855023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:01.787 [2024-12-06 11:13:12.855319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83876 ]
00:17:02.046 [2024-12-06 11:13:12.991747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:02.046 [2024-12-06 11:13:13.024124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:02.046 11:13:13 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:02.046 11:13:13 -- common/autotest_common.sh@862 -- # return 0
00:17:02.046 11:13:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:02.046 11:13:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:02.305 11:13:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:02.305 11:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.305 11:13:13 -- common/autotest_common.sh@10 -- # set +x
00:17:02.306 11:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.306 11:13:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:02.306 11:13:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:02.565 nvme0n1
00:17:02.565 11:13:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:02.565 11:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.565 11:13:13 --
common/autotest_common.sh@10 -- # set +x 00:17:02.565 11:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.565 11:13:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:02.565 11:13:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:02.824 Running I/O for 2 seconds... 00:17:02.824 [2024-12-06 11:13:13.779456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ddc00 00:17:02.824 [2024-12-06 11:13:13.780943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.780987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.794178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fef90 00:17:02.824 [2024-12-06 11:13:13.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.795598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.808605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ff3c8 00:17:02.824 [2024-12-06 11:13:13.809957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.809991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.822896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190feb58 00:17:02.824 [2024-12-06 11:13:13.824226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.824260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.837427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fe720 00:17:02.824 [2024-12-06 11:13:13.838827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.838861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.852056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fe2e8 00:17:02.824 [2024-12-06 11:13:13.853352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.824 [2024-12-06 11:13:13.853571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:02.824 [2024-12-06 11:13:13.866631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) 
with pdu=0x2000190fdeb0 00:17:02.825 [2024-12-06 11:13:13.867956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.867991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.882159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fda78 00:17:02.825 [2024-12-06 11:13:13.883470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.883512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.898246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fd640 00:17:02.825 [2024-12-06 11:13:13.899793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.899834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.915354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fd208 00:17:02.825 [2024-12-06 11:13:13.916686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.916752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.931706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fcdd0 00:17:02.825 [2024-12-06 11:13:13.933128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.933176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.947184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fc998 00:17:02.825 [2024-12-06 11:13:13.948505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.948753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:02.825 [2024-12-06 11:13:13.962750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fc560 00:17:02.825 [2024-12-06 11:13:13.964115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.825 [2024-12-06 11:13:13.964307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:13.979542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1056d30) with pdu=0x2000190fc128 00:17:03.085 [2024-12-06 11:13:13.980882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:13.980925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:13.996543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fbcf0 00:17:03.085 [2024-12-06 11:13:13.998001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:13.998247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.012439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fb8b8 00:17:03.085 [2024-12-06 11:13:14.013821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.013854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.027957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fb480 00:17:03.085 [2024-12-06 11:13:14.029421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.029461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.043032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fb048 00:17:03.085 [2024-12-06 11:13:14.044404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.044436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.058065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fac10 00:17:03.085 [2024-12-06 11:13:14.059313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.059353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.072960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fa7d8 00:17:03.085 [2024-12-06 11:13:14.074136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.074173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.088054] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1056d30) with pdu=0x2000190fa3a0 00:17:03.085 [2024-12-06 11:13:14.089411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.089442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.102790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f9f68 00:17:03.085 [2024-12-06 11:13:14.104212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.104259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.120022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f9b30 00:17:03.085 [2024-12-06 11:13:14.121283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.121318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.135788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f96f8 00:17:03.085 [2024-12-06 11:13:14.137206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.137237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.150460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f92c0 00:17:03.085 [2024-12-06 11:13:14.151736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.151771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.164843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f8e88 00:17:03.085 [2024-12-06 11:13:14.165960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.165995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.179215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f8a50 00:17:03.085 [2024-12-06 11:13:14.180407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.180441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.193784] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f8618 00:17:03.085 [2024-12-06 11:13:14.194887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.194920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.208172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f81e0 00:17:03.085 [2024-12-06 11:13:14.209414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.085 [2024-12-06 11:13:14.209441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:03.085 [2024-12-06 11:13:14.224252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f7da8 00:17:03.086 [2024-12-06 11:13:14.225408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.086 [2024-12-06 11:13:14.225444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:03.345 [2024-12-06 11:13:14.240740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f7970 00:17:03.345 [2024-12-06 11:13:14.241946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.345 [2024-12-06 11:13:14.241980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:03.345 [2024-12-06 11:13:14.256350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f7538 00:17:03.345 [2024-12-06 11:13:14.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.345 [2024-12-06 11:13:14.257520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:03.345 [2024-12-06 11:13:14.271956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f7100 00:17:03.345 [2024-12-06 11:13:14.273185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.345 [2024-12-06 11:13:14.273214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-12-06 11:13:14.287887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f6cc8 00:17:03.346 [2024-12-06 11:13:14.288981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.289016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 
11:13:14.302881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f6890 00:17:03.346 [2024-12-06 11:13:14.303992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.304188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.319273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f6458 00:17:03.346 [2024-12-06 11:13:14.320388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.320423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.336612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f6020 00:17:03.346 [2024-12-06 11:13:14.337839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.337876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.353881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f5be8 00:17:03.346 [2024-12-06 11:13:14.354918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.354961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.370539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f57b0 00:17:03.346 [2024-12-06 11:13:14.371703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.371776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.387655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f5378 00:17:03.346 [2024-12-06 11:13:14.388856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.388920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.404348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f4f40 00:17:03.346 [2024-12-06 11:13:14.405379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.405418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:17:03.346 [2024-12-06 11:13:14.419434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f4b08 00:17:03.346 [2024-12-06 11:13:14.420501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.420721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.433977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f46d0 00:17:03.346 [2024-12-06 11:13:14.434916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.434952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.448456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f4298 00:17:03.346 [2024-12-06 11:13:14.449581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.449632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.463078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f3e60 00:17:03.346 [2024-12-06 11:13:14.464078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.464292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:03.346 [2024-12-06 11:13:14.477741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f3a28 00:17:03.346 [2024-12-06 11:13:14.478632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.346 [2024-12-06 11:13:14.478667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.492904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f35f0 00:17:03.606 [2024-12-06 11:13:14.493890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.493958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.507655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f31b8 00:17:03.606 [2024-12-06 11:13:14.508700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.508739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.522431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f2d80 00:17:03.606 [2024-12-06 11:13:14.523414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.523452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.537835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f2948 00:17:03.606 [2024-12-06 11:13:14.538714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.538924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.552494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f2510 00:17:03.606 [2024-12-06 11:13:14.553574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.553626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.567175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f20d8 00:17:03.606 [2024-12-06 11:13:14.568157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.568205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.581843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f1ca0 00:17:03.606 [2024-12-06 11:13:14.582643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.582711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.597769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f1868 00:17:03.606 [2024-12-06 11:13:14.598600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.598695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.613462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f1430 00:17:03.606 [2024-12-06 11:13:14.614391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.614426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.627955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f0ff8 00:17:03.606 [2024-12-06 11:13:14.629087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.629134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.643160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f0bc0 00:17:03.606 [2024-12-06 11:13:14.644050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.644252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.658089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f0788 00:17:03.606 [2024-12-06 11:13:14.659108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.659137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.672851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190f0350 00:17:03.606 [2024-12-06 11:13:14.673834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.673896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.687538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eff18 00:17:03.606 [2024-12-06 11:13:14.688332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.688370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.702065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190efae0 00:17:03.606 [2024-12-06 11:13:14.702807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.702843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.716470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ef6a8 00:17:03.606 [2024-12-06 11:13:14.717253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.717297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.730934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ef270 00:17:03.606 [2024-12-06 11:13:14.731714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.731751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:03.606 [2024-12-06 11:13:14.745195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eee38 00:17:03.606 [2024-12-06 11:13:14.745939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.606 [2024-12-06 11:13:14.745976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.760655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eea00 00:17:03.866 [2024-12-06 11:13:14.761365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.761401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.775556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ee5c8 00:17:03.866 [2024-12-06 11:13:14.776280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.776320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.792003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ee190 00:17:03.866 [2024-12-06 11:13:14.793010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.793041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.807771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190edd58 00:17:03.866 [2024-12-06 11:13:14.808422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.808622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.822301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ed920 00:17:03.866 [2024-12-06 11:13:14.823187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.823218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:03.866 [2024-12-06 11:13:14.838966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ed4e8 00:17:03.866 [2024-12-06 11:13:14.839700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.866 [2024-12-06 11:13:14.839740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.854178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ed0b0 00:17:03.867 [2024-12-06 11:13:14.854809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.854846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.868549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ecc78 00:17:03.867 [2024-12-06 11:13:14.869375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.869403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.883536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ec840 00:17:03.867 [2024-12-06 11:13:14.884312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.884364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.898041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ec408 00:17:03.867 [2024-12-06 11:13:14.898700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.898739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.912670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ebfd0 00:17:03.867 [2024-12-06 11:13:14.913271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.913307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.928240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ebb98 00:17:03.867 [2024-12-06 11:13:14.928885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 
11:13:14.928937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.944158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eb760 00:17:03.867 [2024-12-06 11:13:14.944747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.944784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.959423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eb328 00:17:03.867 [2024-12-06 11:13:14.960088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.960118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.973862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eaef0 00:17:03.867 [2024-12-06 11:13:14.974432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.974467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:14.988567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190eaab8 00:17:03.867 [2024-12-06 11:13:14.989211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:14.989260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:03.867 [2024-12-06 11:13:15.004774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ea680 00:17:03.867 [2024-12-06 11:13:15.005384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.867 [2024-12-06 11:13:15.005425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.021551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190ea248 00:17:04.126 [2024-12-06 11:13:15.022169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.022372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.037681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e9e10 00:17:04.126 [2024-12-06 11:13:15.038321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:04.126 [2024-12-06 11:13:15.038361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.054140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e99d8 00:17:04.126 [2024-12-06 11:13:15.054661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.054716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.069071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e95a0 00:17:04.126 [2024-12-06 11:13:15.069528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.069607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.084199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e9168 00:17:04.126 [2024-12-06 11:13:15.084910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.084943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.101420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e8d30 00:17:04.126 [2024-12-06 11:13:15.101963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.101999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.117110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e88f8 00:17:04.126 [2024-12-06 11:13:15.117567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.117618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.131950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e84c0 00:17:04.126 [2024-12-06 11:13:15.132500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.132557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.146816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e8088 00:17:04.126 [2024-12-06 11:13:15.147292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22602 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.147338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.161736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e7c50 00:17:04.126 [2024-12-06 11:13:15.162152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.162229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.177083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e7818 00:17:04.126 [2024-12-06 11:13:15.177740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.177775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.192309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e73e0 00:17:04.126 [2024-12-06 11:13:15.192729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.192757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.207363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e6fa8 00:17:04.126 [2024-12-06 11:13:15.207985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.208035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.222527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e6b70 00:17:04.126 [2024-12-06 11:13:15.223207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.223279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.237524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e6738 00:17:04.126 [2024-12-06 11:13:15.238180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.238233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.253550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e6300 00:17:04.126 [2024-12-06 11:13:15.254169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.254202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.126 [2024-12-06 11:13:15.269008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e5ec8 00:17:04.126 [2024-12-06 11:13:15.269621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.126 [2024-12-06 11:13:15.269663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.284207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e5a90 00:17:04.384 [2024-12-06 11:13:15.284786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.284821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.298851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e5658 00:17:04.384 [2024-12-06 11:13:15.299371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.299404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.314345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e5220 00:17:04.384 [2024-12-06 11:13:15.314889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.314922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.328984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e4de8 00:17:04.384 [2024-12-06 11:13:15.329281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.329307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.343755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e49b0 00:17:04.384 [2024-12-06 11:13:15.344090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.344117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.360070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e4578 00:17:04.384 [2024-12-06 11:13:15.360342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:13500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.360367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.376154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e4140 00:17:04.384 [2024-12-06 11:13:15.376436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.376462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.391491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e3d08 00:17:04.384 [2024-12-06 11:13:15.391825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.391857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.406484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e38d0 00:17:04.384 [2024-12-06 11:13:15.406800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.406832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.421440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e3498 00:17:04.384 [2024-12-06 11:13:15.421784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.421815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.436235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e3060 00:17:04.384 [2024-12-06 11:13:15.436476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.436496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.452015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e2c28 00:17:04.384 [2024-12-06 11:13:15.452285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.452314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.467736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e27f0 00:17:04.384 [2024-12-06 11:13:15.468140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.468172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.483129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e23b8 00:17:04.384 [2024-12-06 11:13:15.483417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.483440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.498121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e1f80 00:17:04.384 [2024-12-06 11:13:15.498488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.498514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.513007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e1b48 00:17:04.384 [2024-12-06 11:13:15.513200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.513221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:04.384 [2024-12-06 11:13:15.527351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e1710 00:17:04.384 [2024-12-06 11:13:15.527624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.384 [2024-12-06 11:13:15.527680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.542307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e12d8 00:17:04.643 [2024-12-06 11:13:15.542481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.542501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.556649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e0ea0 00:17:04.643 [2024-12-06 11:13:15.556815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.556835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.571771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e0a68 00:17:04.643 [2024-12-06 
11:13:15.572117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.572138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.586306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e0630 00:17:04.643 [2024-12-06 11:13:15.586456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.586476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.600828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190e01f8 00:17:04.643 [2024-12-06 11:13:15.600985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.601006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.615069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190dfdc0 00:17:04.643 [2024-12-06 11:13:15.615199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.615220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.629243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190df988 00:17:04.643 [2024-12-06 11:13:15.629366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.629386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.644368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190df550 00:17:04.643 [2024-12-06 11:13:15.644698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.644738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.659369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190df118 00:17:04.643 [2024-12-06 11:13:15.659643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.659666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.674058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190dece0 
00:17:04.643 [2024-12-06 11:13:15.674373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.674648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.689022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190de8a8 00:17:04.643 [2024-12-06 11:13:15.689294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.689454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.704397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190de038 00:17:04.643 [2024-12-06 11:13:15.704710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.705016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.724853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190de038 00:17:04.643 [2024-12-06 11:13:15.726345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.726574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.739413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190de470 00:17:04.643 [2024-12-06 11:13:15.740887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.741091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.643 [2024-12-06 11:13:15.754139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1056d30) with pdu=0x2000190de8a8 00:17:04.643 [2024-12-06 11:13:15.755753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.643 [2024-12-06 11:13:15.755972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:04.643 00:17:04.643 Latency(us) 00:17:04.643 [2024-12-06T11:13:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.643 [2024-12-06T11:13:15.790Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.643 nvme0n1 : 2.00 16602.40 64.85 0.00 0.00 7703.62 6821.70 20494.89 00:17:04.643 [2024-12-06T11:13:15.790Z] =================================================================================================================== 00:17:04.643 [2024-12-06T11:13:15.790Z] Total : 16602.40 64.85 0.00 0.00 7703.62 6821.70 20494.89 00:17:04.643 0 00:17:04.643 11:13:15 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:04.643 11:13:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:04.643 11:13:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:04.643 11:13:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:04.643 | .driver_specific 00:17:04.643 | .nvme_error 00:17:04.643 | .status_code 00:17:04.643 | .command_transient_transport_error' 00:17:05.210 11:13:16 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:17:05.210 11:13:16 -- host/digest.sh@73 -- # killprocess 83876 00:17:05.210 11:13:16 -- common/autotest_common.sh@936 -- # '[' -z 83876 ']' 00:17:05.210 11:13:16 -- common/autotest_common.sh@940 -- # kill -0 83876 00:17:05.210 11:13:16 -- common/autotest_common.sh@941 -- # uname 00:17:05.210 11:13:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.210 11:13:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83876 00:17:05.210 killing process with pid 83876 00:17:05.210 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.210 00:17:05.210 Latency(us) 00:17:05.210 [2024-12-06T11:13:16.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.210 [2024-12-06T11:13:16.357Z] =================================================================================================================== 00:17:05.210 [2024-12-06T11:13:16.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.210 11:13:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:05.210 11:13:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:05.210 11:13:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83876' 00:17:05.210 11:13:16 -- common/autotest_common.sh@955 -- # kill 83876 00:17:05.210 11:13:16 -- common/autotest_common.sh@960 -- # wait 83876 00:17:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:05.210 11:13:16 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:17:05.210 11:13:16 -- host/digest.sh@54 -- # local rw bs qd 00:17:05.210 11:13:16 -- host/digest.sh@56 -- # rw=randwrite 00:17:05.210 11:13:16 -- host/digest.sh@56 -- # bs=131072 00:17:05.210 11:13:16 -- host/digest.sh@56 -- # qd=16 00:17:05.210 11:13:16 -- host/digest.sh@58 -- # bperfpid=83929 00:17:05.210 11:13:16 -- host/digest.sh@60 -- # waitforlisten 83929 /var/tmp/bperf.sock 00:17:05.210 11:13:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:05.210 11:13:16 -- common/autotest_common.sh@829 -- # '[' -z 83929 ']' 00:17:05.210 11:13:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.210 11:13:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.210 11:13:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.210 11:13:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.210 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:17:05.210 [2024-12-06 11:13:16.284714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
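(For reference: the 130 checked by the (( 130 > 0 )) assertion traced above is the per-bdev count of completions with COMMAND TRANSIENT TRANSPORT ERROR status, which the nvme bdev layer tracks when error statistics are enabled via bdev_nvme_set_options --nvme-error-stat, as done again for the next run below. A minimal standalone sketch of the same query, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and using a hypothetical errs variable rather than the get_transient_errcount helper from host/digest.sh:

  # Ask bdevperf for nvme0n1's I/O statistics and extract the number of
  # completions that carried COMMAND TRANSIENT TRANSPORT ERROR status.
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
           bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                | .command_transient_transport_error')

  # The digest test only requires that at least one injected CRC error
  # surfaced as a transient transport error.
  (( errs > 0 )) && echo "transient transport errors observed: $errs"

The same pipeline is what the harness runs via bperf_rpc and jq in the trace above.)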
00:17:05.210 [2024-12-06 11:13:16.285049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83929 ] 00:17:05.210 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:05.210 Zero copy mechanism will not be used. 00:17:05.468 [2024-12-06 11:13:16.417653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.468 [2024-12-06 11:13:16.450254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.402 11:13:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.402 11:13:17 -- common/autotest_common.sh@862 -- # return 0 00:17:06.402 11:13:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.402 11:13:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.402 11:13:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:06.402 11:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.402 11:13:17 -- common/autotest_common.sh@10 -- # set +x 00:17:06.402 11:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.402 11:13:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.402 11:13:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.968 nvme0n1 00:17:06.968 11:13:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:06.968 11:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.968 11:13:17 -- common/autotest_common.sh@10 -- # set +x 00:17:06.968 11:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.968 11:13:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:06.968 11:13:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:06.968 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:06.968 Zero copy mechanism will not be used. 00:17:06.968 Running I/O for 2 seconds... 
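(For reference: before the "Running I/O for 2 seconds..." line above, the traced commands set the second run up end to end. Below is a condensed sketch of that sequence using the same sockets, target address and subsystem NQN shown in the trace. The BPERF_RPC/TGT_RPC variable names are mine, the harness's waitforlisten step is omitted, and which application the plain rpc_cmd calls reach is not visible in this excerpt; /var/tmp/spdk.sock is assumed here for the accel error injection:

  # Start bdevperf in RPC-driven mode (-z: idle until a perform_tests RPC),
  # 128 KiB random writes, queue depth 16, 2-second runs, on its own socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  TGT_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'   # assumed target socket

  # Keep NVMe error statistics and retry failed I/O indefinitely inside bdevperf.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any leftover CRC32C error injection, then attach the TCP controller
  # with data digest enabled (--ddgst) so payloads carry a CRC32C checksum.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 32 CRC32C operations so digest verification fails and the
  # writes complete with COMMAND TRANSIENT TRANSPORT ERROR (the 00/22 status
  # printed throughout this log).
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Trigger the queued workload defined above.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The data digest errors and transient transport completions that follow are therefore the expected outcome of the injected CRC corruption, not a regression.)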
00:17:06.968 [2024-12-06 11:13:17.933896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.968 [2024-12-06 11:13:17.934212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.968 [2024-12-06 11:13:17.934242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.968 [2024-12-06 11:13:17.938807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.968 [2024-12-06 11:13:17.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.968 [2024-12-06 11:13:17.939136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.968 [2024-12-06 11:13:17.943648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.968 [2024-12-06 11:13:17.943956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.968 [2024-12-06 11:13:17.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.968 [2024-12-06 11:13:17.948430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.968 [2024-12-06 11:13:17.948979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.968 [2024-12-06 11:13:17.949028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.968 [2024-12-06 11:13:17.953398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.953694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.953723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.958147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.958428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.958456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.962948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.963228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.963281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.967727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.968009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.968037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.972616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.972960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.972989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.977737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.978085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.978113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.983021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.983351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.983382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.988289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.988806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.988842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.993991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.994278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.994307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:17.999129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:17.999462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:17.999493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.004312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.004856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.004891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.009712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.010052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.010079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.014764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.015065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.015092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.019621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.019917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.019944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.024432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.024941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.024989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.029409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.029737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.029770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.034162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.034441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.034468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.039007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.039327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.039355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.044453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.044967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.045002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.050018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.050372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.050401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.055612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.055968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.056001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.061380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.061769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.061808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.066786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.067149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.067177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.072327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.072665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 
[2024-12-06 11:13:18.072705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.077887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.078251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.078280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.083357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.083722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.083781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.088809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.969 [2024-12-06 11:13:18.089165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.969 [2024-12-06 11:13:18.089195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.969 [2024-12-06 11:13:18.094370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.970 [2024-12-06 11:13:18.094704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.970 [2024-12-06 11:13:18.094773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.970 [2024-12-06 11:13:18.099993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.970 [2024-12-06 11:13:18.100325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.970 [2024-12-06 11:13:18.100352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.970 [2024-12-06 11:13:18.105418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.970 [2024-12-06 11:13:18.105970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.970 [2024-12-06 11:13:18.106006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.970 [2024-12-06 11:13:18.111345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:06.970 [2024-12-06 11:13:18.111693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.970 [2024-12-06 11:13:18.111751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.117067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.117625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.117687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.122587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.122920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.122950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.127817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.128145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.128173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.132953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.133270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.133298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.138136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.138413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.138441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.143288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.143782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.143832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.148394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.148720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.148756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.153807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.154095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.154124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.158778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.159061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.159089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.163475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.163812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.163861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.168231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.168509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.168547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.238 [2024-12-06 11:13:18.172999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.238 [2024-12-06 11:13:18.173281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.238 [2024-12-06 11:13:18.173309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.177783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.178068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.182536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.182872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.182905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.187455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.187770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.187803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.192339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.192683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.192727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.197263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.197759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.197813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.202254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.202538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.202577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.206948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.207227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.207296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.211914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.212216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.216669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 
[2024-12-06 11:13:18.216949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.216977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.221344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.221889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.221924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.226260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.226542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.226580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.230975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.231281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.235777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.236058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.236085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.240579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.240859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.240886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.245241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.245747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.245800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.250342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.250652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.250680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.255053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.255381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.255409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.259927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.260234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.264731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.265012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.265040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.269492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.270022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.270071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.274477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.274782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.274810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.279265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:07.239 [2024-12-06 11:13:18.279576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.239 [2024-12-06 11:13:18.279614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.239 [2024-12-06 11:13:18.284051] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90
00:17:07.239 [2024-12-06 11:13:18.284330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:07.239 [2024-12-06 11:13:18.284357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:07.239 [2024-12-06 11:13:18.288785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90
00:17:07.239 [2024-12-06 11:13:18.289064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:07.239 [2024-12-06 11:13:18.289091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
(The same three-entry sequence repeats for every write submitted between 11:13:18.293 and 11:13:18.988, at wall-clock offsets 00:17:07.239 through 00:17:08.040: tcp.c:2036:data_crc32_calc_done logs a data digest *ERROR* on tqpair=(0x1055860) with pdu=0x2000190fef90, nvme_io_qpair_print_command logs the affected WRITE sqid:1 cid:15 nsid:1 len:32 with a varying lba, and spdk_nvme_print_completion logs COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with an incrementing sqhd.)
00:17:08.040 [2024-12-06 11:13:18.993449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.040 
[2024-12-06 11:13:18.993829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-12-06 11:13:18.993864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.040 [2024-12-06 11:13:18.998630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.040 [2024-12-06 11:13:18.998976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-12-06 11:13:18.999004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.040 [2024-12-06 11:13:19.003768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.040 [2024-12-06 11:13:19.004087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-12-06 11:13:19.004120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.040 [2024-12-06 11:13:19.008876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.040 [2024-12-06 11:13:19.009209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.009238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.014117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.014397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.014425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.019109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.019448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.019476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.024199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.024478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.024506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.029273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.029573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.029611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.034264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.034767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.034801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.039353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.039682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.039711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.044261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.044543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.044580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.049002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.049297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.049325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.053807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.054129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.054157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.059266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.059625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.059667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.064772] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.065087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.065165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.070371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.070825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.070880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.075992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.076611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.076698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.081809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.082155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.082184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.087225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.087705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.087767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.092750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.093072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.093101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.097820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.098132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.098161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
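The repeated pattern in this part of the log — a data_crc32_calc_done error on the TCP qpair, the WRITE command it belongs to, and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — is the NVMe/TCP data digest (CRC32C) check failing on each injected write. The sketch below illustrates that check in isolation; the struct layout, function names, and status plumbing are assumptions made for the example, not SPDK's actual implementation.

/*
 * Minimal sketch of an NVMe/TCP data-digest (CRC32C) verification, the kind of
 * check behind the "Data digest error on tqpair" messages above.  Names and
 * structures are illustrative assumptions, not SPDK code.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NVME_SCT_GENERIC                0x0   /* status code type, the "00" in (00/22) */
#define NVME_SC_TRANSIENT_TRANSPORT_ERR 0x22  /* status code, the "22" in (00/22) */

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical view of a received data PDU: payload plus the digest trailer. */
struct data_pdu {
	const uint8_t *payload;
	size_t         payload_len;
	uint32_t       received_digest;   /* digest carried on the wire */
};

/*
 * Recompute the digest over the payload and compare it with the received one.
 * On mismatch, report the error and hand back the generic transient transport
 * error status that shows up in the completions above.
 */
static int verify_data_digest(const struct data_pdu *pdu,
			      uint8_t *sct_out, uint8_t *sc_out)
{
	uint32_t calculated = crc32c(pdu->payload, pdu->payload_len);

	if (calculated != pdu->received_digest) {
		fprintf(stderr, "Data digest error: got 0x%08x, expected 0x%08x\n",
			pdu->received_digest, calculated);
		*sct_out = NVME_SCT_GENERIC;
		*sc_out  = NVME_SC_TRANSIENT_TRANSPORT_ERR;
		return -1;
	}
	return 0;
}

int main(void)
{
	uint8_t block[32] = { 0xde, 0xad, 0xbe, 0xef };   /* 32-byte payload, like len:32 above */
	struct data_pdu pdu = {
		.payload = block,
		.payload_len = sizeof(block),
		.received_digest = crc32c(block, sizeof(block)) ^ 0x1,  /* deliberately corrupted */
	};
	uint8_t sct = 0, sc = 0;

	if (verify_data_digest(&pdu, &sct, &sc) != 0) {
		printf("completing command with status (%02x/%02x)\n", sct, sc);
	}
	return 0;
}

Each block of log lines that follows is simply another iteration of the same check failing for a different LBA, so the error, the WRITE command, and the (00/22) completion repeat once per injected write.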
00:17:08.041 [2024-12-06 11:13:19.102566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.102856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.102884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.107335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.107822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.107871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.112292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.112590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.112646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.117106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.117560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.117634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.122081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.122361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.122389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.126765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.127048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.127075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.131515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.132036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.132083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.136526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.136855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.136882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.141262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.141542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.041 [2024-12-06 11:13:19.141579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.041 [2024-12-06 11:13:19.145935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.041 [2024-12-06 11:13:19.146216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.146243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.150654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.150958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.150986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.155415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.155919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.155966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.160406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.160734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.165157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.165437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.165464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.169865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.170147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.170174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.174604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.174886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.174914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.042 [2024-12-06 11:13:19.179368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.042 [2024-12-06 11:13:19.179825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.042 [2024-12-06 11:13:19.179861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.184971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.185313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.185355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.190099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.190425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.190454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.194919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.195195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.195223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.199745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.200025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.200053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.204413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.204734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.204766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.209210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.209490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.209517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.214462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.214848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.219768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.220045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.220073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.224508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.224872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.224923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.229364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.229678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.229706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.234167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.234447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:08.302 [2024-12-06 11:13:19.234475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.238997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.302 [2024-12-06 11:13:19.239306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-12-06 11:13:19.239335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-12-06 11:13:19.243905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.244187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.244214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.248672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.248962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.248990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.253349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.253660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.253688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.258138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.258419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.258446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.262912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.263193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.263221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.267729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.267998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.268025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.272430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.272733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.272763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.277149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.277428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.281950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.282223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.282254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.286646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.286918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.286945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.291217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.291771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.291818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.296222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.296638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.296693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.301033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.301388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.301428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.305766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.306128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.306167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.310984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.311333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.311364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.316301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.316610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.316650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.321758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.322204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.327282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.327650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.327679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.332680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.333186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.333300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.338428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.338768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.338813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.343894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.344271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.344314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.349415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.349751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.349787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.354819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.355198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.355227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.360341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.360643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.360680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.365765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.366156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.366185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.371206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.303 [2024-12-06 11:13:19.371535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.371605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.376819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 
00:17:08.303 [2024-12-06 11:13:19.377172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.303 [2024-12-06 11:13:19.377231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.303 [2024-12-06 11:13:19.382165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.382443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.382471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.387667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.388146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.388269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.392921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.393213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.393247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.398233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.398339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.398371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.403400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.403500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.403534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.408441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.408759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.413574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.413733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.413766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.418240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.418352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.418384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.422818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.422930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.422962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.427327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.427423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.427457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.432191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.432457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.432492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.437291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.437399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.437431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.304 [2024-12-06 11:13:19.442295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.304 [2024-12-06 11:13:19.442406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.304 [2024-12-06 11:13:19.442438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.447851] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.447943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.447974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.452854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.452963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.452994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.457355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.457460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.457491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.462133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.462237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.462268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.466797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.466904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.466933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.472007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.472151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.472183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.477243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.477353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.477383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:08.564 [2024-12-06 11:13:19.481989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.482091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.482121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.486607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.486711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.486741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.491202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.491336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.491369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.495975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.496084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.496117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.500631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.500736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.500768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.505121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.505229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-12-06 11:13:19.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-12-06 11:13:19.509774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.564 [2024-12-06 11:13:19.509896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.509927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.514495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.514621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.514646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.519226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.519346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.519369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.523889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.523998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.524021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.528534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.528659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.528682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.533158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.533412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.533435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.538105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.538202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.538224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.542771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.542867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.542889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.547504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.547655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.547693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.552265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.552358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.552379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.556969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.557065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.557087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.561783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.561877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.561898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.566452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.566547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.566596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.571095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.571190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.571210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.575797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.575893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.575915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.580466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.580560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.580609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.585257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.585508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.585530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.590263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.590360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.590381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.595058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.595152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.595173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.599873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.599983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.600004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.604474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.604567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.604619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.609117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.609211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 
[2024-12-06 11:13:19.609232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.613732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.613847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.618378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.618474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.618495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.623073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.623166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.623187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.627866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.627960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.627981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.632480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.632601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.632622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.565 [2024-12-06 11:13:19.637204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.565 [2024-12-06 11:13:19.637455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.565 [2024-12-06 11:13:19.637477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.642224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.642320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.642341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.646893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.646989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.651707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.651799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.651822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.656293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.656387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.656408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.661159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.661271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.661292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.666096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.666369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.666507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.671524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.671702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.671725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.676532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.676674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.676713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.681387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.681682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.681706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.686417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.686510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.686531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.691345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.691441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.691465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.696946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.697113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.566 [2024-12-06 11:13:19.702075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.566 [2024-12-06 11:13:19.702186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.566 [2024-12-06 11:13:19.702207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.707639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.707757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.707793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.712626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.712758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.712780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.717934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.718032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.718055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.723129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.723226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.723274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.728691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.728815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.728838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.734250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.734353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.734375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.739502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.739617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.739641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.744587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.744875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.744899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.749864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 
[2024-12-06 11:13:19.749964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.749985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.754724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.754821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.754842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.759726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.759826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.759847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.764458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.764554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.764591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.769422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.769518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.769540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.774349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.774446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.774467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.779217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.779476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.779499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.784197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.784276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.784312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.789194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.789291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.789312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.793905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.794001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.794023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.798800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.798906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.798927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.803666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.803766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.803787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.808301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.808398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.808418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.813389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.827 [2024-12-06 11:13:19.813485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.827 [2024-12-06 11:13:19.813507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.827 [2024-12-06 11:13:19.818237] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.818333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.818354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.823024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.823119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.823141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.828132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.828230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.828251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.832942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.833070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.833092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.837763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.837857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.837878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.842739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.842839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.842860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.847464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.847775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.847798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:08.828 [2024-12-06 11:13:19.852763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.852879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.852899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.857503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.857630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.857653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.862299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.862395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.862416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.867283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.867520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.867542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.872596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.872701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.872722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.877374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.877470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.877494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.883114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.883382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.883407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.888449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.888774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.889199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.893771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.894025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.894182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.898652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.898895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.899051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.903683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.903935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.904092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.908746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.908990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.909249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.913791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.914038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.914236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.918649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.918947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.919144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.923883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.923976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.923998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.828 [2024-12-06 11:13:19.928445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1055860) with pdu=0x2000190fef90 00:17:08.828 [2024-12-06 11:13:19.928529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.828 [2024-12-06 11:13:19.928551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.828 00:17:08.828 Latency(us) 00:17:08.828 [2024-12-06T11:13:19.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.828 [2024-12-06T11:13:19.975Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:08.828 nvme0n1 : 2.00 6138.38 767.30 0.00 0.00 2601.14 1422.43 6017.40 00:17:08.828 [2024-12-06T11:13:19.975Z] =================================================================================================================== 00:17:08.828 [2024-12-06T11:13:19.975Z] Total : 6138.38 767.30 0.00 0.00 2601.14 1422.43 6017.40 00:17:08.828 0 00:17:08.828 11:13:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:08.828 11:13:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:08.828 | .driver_specific 00:17:08.828 | .nvme_error 00:17:08.828 | .status_code 00:17:08.828 | .command_transient_transport_error' 00:17:08.828 11:13:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:08.828 11:13:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:09.087 11:13:20 -- host/digest.sh@71 -- # (( 396 > 0 )) 00:17:09.087 11:13:20 -- host/digest.sh@73 -- # killprocess 83929 00:17:09.087 11:13:20 -- common/autotest_common.sh@936 -- # '[' -z 83929 ']' 00:17:09.087 11:13:20 -- common/autotest_common.sh@940 -- # kill -0 83929 00:17:09.087 11:13:20 -- common/autotest_common.sh@941 -- # uname 00:17:09.087 11:13:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.087 11:13:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83929 00:17:09.349 killing process with pid 83929 00:17:09.349 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.349 00:17:09.349 Latency(us) 00:17:09.349 [2024-12-06T11:13:20.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.349 [2024-12-06T11:13:20.496Z] =================================================================================================================== 00:17:09.349 [2024-12-06T11:13:20.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.349 11:13:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:09.349 11:13:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:09.349 11:13:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83929' 00:17:09.349 11:13:20 -- 
common/autotest_common.sh@955 -- # kill 83929 00:17:09.349 11:13:20 -- common/autotest_common.sh@960 -- # wait 83929 00:17:09.349 11:13:20 -- host/digest.sh@115 -- # killprocess 83757 00:17:09.349 11:13:20 -- common/autotest_common.sh@936 -- # '[' -z 83757 ']' 00:17:09.349 11:13:20 -- common/autotest_common.sh@940 -- # kill -0 83757 00:17:09.349 11:13:20 -- common/autotest_common.sh@941 -- # uname 00:17:09.349 11:13:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.349 11:13:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83757 00:17:09.349 killing process with pid 83757 00:17:09.349 11:13:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.349 11:13:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.349 11:13:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83757' 00:17:09.349 11:13:20 -- common/autotest_common.sh@955 -- # kill 83757 00:17:09.349 11:13:20 -- common/autotest_common.sh@960 -- # wait 83757 00:17:09.609 00:17:09.609 real 0m15.211s 00:17:09.609 user 0m29.494s 00:17:09.609 sys 0m4.476s 00:17:09.609 11:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.609 ************************************ 00:17:09.609 END TEST nvmf_digest_error 00:17:09.609 ************************************ 00:17:09.609 11:13:20 -- common/autotest_common.sh@10 -- # set +x 00:17:09.609 11:13:20 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:17:09.609 11:13:20 -- host/digest.sh@139 -- # nvmftestfini 00:17:09.609 11:13:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:09.609 11:13:20 -- nvmf/common.sh@116 -- # sync 00:17:09.609 11:13:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:09.609 11:13:20 -- nvmf/common.sh@119 -- # set +e 00:17:09.609 11:13:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:09.609 11:13:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:09.609 rmmod nvme_tcp 00:17:09.609 rmmod nvme_fabrics 00:17:09.609 rmmod nvme_keyring 00:17:09.609 11:13:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:09.609 Process with pid 83757 is not found 00:17:09.609 11:13:20 -- nvmf/common.sh@123 -- # set -e 00:17:09.609 11:13:20 -- nvmf/common.sh@124 -- # return 0 00:17:09.609 11:13:20 -- nvmf/common.sh@477 -- # '[' -n 83757 ']' 00:17:09.609 11:13:20 -- nvmf/common.sh@478 -- # killprocess 83757 00:17:09.609 11:13:20 -- common/autotest_common.sh@936 -- # '[' -z 83757 ']' 00:17:09.609 11:13:20 -- common/autotest_common.sh@940 -- # kill -0 83757 00:17:09.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83757) - No such process 00:17:09.610 11:13:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83757 is not found' 00:17:09.610 11:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:09.610 11:13:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:09.610 11:13:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:09.610 11:13:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.610 11:13:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:09.610 11:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.610 11:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.610 11:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.610 11:13:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:09.610 00:17:09.610 real 0m31.407s 00:17:09.610 user 0m59.740s 00:17:09.610 sys 0m9.244s 
00:17:09.610 11:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.610 11:13:20 -- common/autotest_common.sh@10 -- # set +x 00:17:09.610 ************************************ 00:17:09.610 END TEST nvmf_digest 00:17:09.610 ************************************ 00:17:09.869 11:13:20 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:09.869 11:13:20 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:09.869 11:13:20 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:09.869 11:13:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:09.869 11:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.869 11:13:20 -- common/autotest_common.sh@10 -- # set +x 00:17:09.869 ************************************ 00:17:09.869 START TEST nvmf_multipath 00:17:09.869 ************************************ 00:17:09.869 11:13:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:09.869 * Looking for test storage... 00:17:09.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:09.869 11:13:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:09.869 11:13:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:09.869 11:13:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:09.869 11:13:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:09.869 11:13:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:09.869 11:13:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:09.869 11:13:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:09.869 11:13:20 -- scripts/common.sh@335 -- # IFS=.-: 00:17:09.869 11:13:20 -- scripts/common.sh@335 -- # read -ra ver1 00:17:09.869 11:13:20 -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.869 11:13:20 -- scripts/common.sh@336 -- # read -ra ver2 00:17:09.869 11:13:20 -- scripts/common.sh@337 -- # local 'op=<' 00:17:09.869 11:13:20 -- scripts/common.sh@339 -- # ver1_l=2 00:17:09.869 11:13:20 -- scripts/common.sh@340 -- # ver2_l=1 00:17:09.869 11:13:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:09.869 11:13:20 -- scripts/common.sh@343 -- # case "$op" in 00:17:09.870 11:13:20 -- scripts/common.sh@344 -- # : 1 00:17:09.870 11:13:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:09.870 11:13:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.870 11:13:20 -- scripts/common.sh@364 -- # decimal 1 00:17:09.870 11:13:20 -- scripts/common.sh@352 -- # local d=1 00:17:09.870 11:13:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.870 11:13:20 -- scripts/common.sh@354 -- # echo 1 00:17:09.870 11:13:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:09.870 11:13:20 -- scripts/common.sh@365 -- # decimal 2 00:17:09.870 11:13:20 -- scripts/common.sh@352 -- # local d=2 00:17:09.870 11:13:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.870 11:13:20 -- scripts/common.sh@354 -- # echo 2 00:17:09.870 11:13:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:09.870 11:13:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:09.870 11:13:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:09.870 11:13:21 -- scripts/common.sh@367 -- # return 0 00:17:09.870 11:13:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.870 11:13:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.870 --rc genhtml_branch_coverage=1 00:17:09.870 --rc genhtml_function_coverage=1 00:17:09.870 --rc genhtml_legend=1 00:17:09.870 --rc geninfo_all_blocks=1 00:17:09.870 --rc geninfo_unexecuted_blocks=1 00:17:09.870 00:17:09.870 ' 00:17:09.870 11:13:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.870 --rc genhtml_branch_coverage=1 00:17:09.870 --rc genhtml_function_coverage=1 00:17:09.870 --rc genhtml_legend=1 00:17:09.870 --rc geninfo_all_blocks=1 00:17:09.870 --rc geninfo_unexecuted_blocks=1 00:17:09.870 00:17:09.870 ' 00:17:09.870 11:13:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.870 --rc genhtml_branch_coverage=1 00:17:09.870 --rc genhtml_function_coverage=1 00:17:09.870 --rc genhtml_legend=1 00:17:09.870 --rc geninfo_all_blocks=1 00:17:09.870 --rc geninfo_unexecuted_blocks=1 00:17:09.870 00:17:09.870 ' 00:17:09.870 11:13:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.870 --rc genhtml_branch_coverage=1 00:17:09.870 --rc genhtml_function_coverage=1 00:17:09.870 --rc genhtml_legend=1 00:17:09.870 --rc geninfo_all_blocks=1 00:17:09.870 --rc geninfo_unexecuted_blocks=1 00:17:09.870 00:17:09.870 ' 00:17:09.870 11:13:21 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.870 11:13:21 -- nvmf/common.sh@7 -- # uname -s 00:17:09.870 11:13:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.870 11:13:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.870 11:13:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.870 11:13:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.870 11:13:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.870 11:13:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.870 11:13:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.870 11:13:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.870 11:13:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.870 11:13:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.130 11:13:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:17:10.130 
11:13:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:17:10.130 11:13:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.130 11:13:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.130 11:13:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.130 11:13:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.130 11:13:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.130 11:13:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.130 11:13:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.130 11:13:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.130 11:13:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.130 11:13:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.130 11:13:21 -- paths/export.sh@5 -- # export PATH 00:17:10.130 11:13:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.130 11:13:21 -- nvmf/common.sh@46 -- # : 0 00:17:10.130 11:13:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:10.130 11:13:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:10.130 11:13:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:10.130 11:13:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.131 11:13:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.131 11:13:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:10.131 11:13:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:10.131 11:13:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:10.131 11:13:21 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.131 11:13:21 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.131 11:13:21 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.131 11:13:21 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:10.131 11:13:21 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.131 11:13:21 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:10.131 11:13:21 -- host/multipath.sh@30 -- # nvmftestinit 00:17:10.131 11:13:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:10.131 11:13:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.131 11:13:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:10.131 11:13:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:10.131 11:13:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:10.131 11:13:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.131 11:13:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.131 11:13:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.131 11:13:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:10.131 11:13:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:10.131 11:13:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:10.131 11:13:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:10.131 11:13:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:10.131 11:13:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:10.131 11:13:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.131 11:13:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.131 11:13:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.131 11:13:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:10.131 11:13:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.131 11:13:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.131 11:13:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.131 11:13:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.131 11:13:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.131 11:13:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.131 11:13:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.131 11:13:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.131 11:13:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:10.131 11:13:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:10.131 Cannot find device "nvmf_tgt_br" 00:17:10.131 11:13:21 -- nvmf/common.sh@154 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.131 Cannot find device "nvmf_tgt_br2" 00:17:10.131 11:13:21 -- nvmf/common.sh@155 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:10.131 11:13:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:10.131 Cannot find device "nvmf_tgt_br" 00:17:10.131 11:13:21 -- nvmf/common.sh@157 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:10.131 Cannot find device 
"nvmf_tgt_br2" 00:17:10.131 11:13:21 -- nvmf/common.sh@158 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:10.131 11:13:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:10.131 11:13:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.131 11:13:21 -- nvmf/common.sh@161 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.131 11:13:21 -- nvmf/common.sh@162 -- # true 00:17:10.131 11:13:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.131 11:13:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.131 11:13:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.131 11:13:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.131 11:13:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.131 11:13:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.131 11:13:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.131 11:13:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.131 11:13:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.390 11:13:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:10.390 11:13:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:10.390 11:13:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:10.390 11:13:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:10.390 11:13:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.390 11:13:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.390 11:13:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.390 11:13:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:10.390 11:13:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:10.390 11:13:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.390 11:13:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.390 11:13:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.390 11:13:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.390 11:13:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.390 11:13:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:10.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:10.390 00:17:10.390 --- 10.0.0.2 ping statistics --- 00:17:10.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.390 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:10.390 11:13:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:10.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:10.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:10.390 00:17:10.390 --- 10.0.0.3 ping statistics --- 00:17:10.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.390 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:10.390 11:13:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:10.390 00:17:10.390 --- 10.0.0.1 ping statistics --- 00:17:10.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.390 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:10.390 11:13:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.390 11:13:21 -- nvmf/common.sh@421 -- # return 0 00:17:10.390 11:13:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:10.390 11:13:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.390 11:13:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:10.390 11:13:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:10.390 11:13:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.391 11:13:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:10.391 11:13:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:10.391 11:13:21 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:10.391 11:13:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.391 11:13:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.391 11:13:21 -- common/autotest_common.sh@10 -- # set +x 00:17:10.391 11:13:21 -- nvmf/common.sh@469 -- # nvmfpid=84205 00:17:10.391 11:13:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:10.391 11:13:21 -- nvmf/common.sh@470 -- # waitforlisten 84205 00:17:10.391 11:13:21 -- common/autotest_common.sh@829 -- # '[' -z 84205 ']' 00:17:10.391 11:13:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.391 11:13:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.391 11:13:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.391 11:13:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.391 11:13:21 -- common/autotest_common.sh@10 -- # set +x 00:17:10.391 [2024-12-06 11:13:21.473964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:10.391 [2024-12-06 11:13:21.474067] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.649 [2024-12-06 11:13:21.613907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.650 [2024-12-06 11:13:21.646247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.650 [2024-12-06 11:13:21.646647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.650 [2024-12-06 11:13:21.646780] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
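Before the target application was launched above, nvmf_veth_init built the whole test network out of veth pairs and a single bridge: the initiator keeps 10.0.0.1 on the host side, while both target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, giving the multipath test two distinct TCP paths to one target. The earlier "Cannot find device" / "Cannot open network namespace" messages are just the cleanup pass failing harmlessly on a fresh VM. Condensed into a standalone sketch (device names and addresses copied from the trace; error handling omitted):

    # One namespace for the target, three veth pairs, one bridge joining the
    # host-side peers -- the topology the pings above just verified.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port (10.0.0.2)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port (10.0.0.3)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> both target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> initiator
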
00:17:10.650 [2024-12-06 11:13:21.646931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.650 [2024-12-06 11:13:21.647306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.650 [2024-12-06 11:13:21.647316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.587 11:13:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.587 11:13:22 -- common/autotest_common.sh@862 -- # return 0 00:17:11.587 11:13:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.587 11:13:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.587 11:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:11.587 11:13:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.588 11:13:22 -- host/multipath.sh@33 -- # nvmfapp_pid=84205 00:17:11.588 11:13:22 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:11.588 [2024-12-06 11:13:22.667905] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.588 11:13:22 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:11.846 Malloc0 00:17:11.846 11:13:22 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:12.104 11:13:23 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.362 11:13:23 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.620 [2024-12-06 11:13:23.684452] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.620 11:13:23 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:12.879 [2024-12-06 11:13:24.004678] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:13.138 11:13:24 -- host/multipath.sh@44 -- # bdevperf_pid=84255 00:17:13.138 11:13:24 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:13.138 11:13:24 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.138 11:13:24 -- host/multipath.sh@47 -- # waitforlisten 84255 /var/tmp/bdevperf.sock 00:17:13.138 11:13:24 -- common/autotest_common.sh@829 -- # '[' -z 84255 ']' 00:17:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.138 11:13:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.138 11:13:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.138 11:13:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
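With the namespace network in place, the target itself is configured purely over its RPC socket: one TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a single subsystem that publishes that namespace on two listeners (ports 4420 and 4421 on 10.0.0.2), so the host has two paths to the same storage. The traced calls, collected in order (all values exactly as they appear above; the flag readings in the comments are the usual rpc.py meanings, not something the log states):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the harness's options
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2                         # any host, ANA reporting on, max 2 namespaces
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2

In the lines that follow, bdevperf (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) attaches this subsystem once per port, the second time with -x multipath, so both TCP connections are folded into one Nvme0n1 bdev whose active path the rest of the test steers by flipping the listeners' ANA states.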
00:17:13.138 11:13:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.138 11:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:14.073 11:13:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.073 11:13:24 -- common/autotest_common.sh@862 -- # return 0 00:17:14.073 11:13:24 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:14.331 11:13:25 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:14.588 Nvme0n1 00:17:14.588 11:13:25 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:14.846 Nvme0n1 00:17:14.846 11:13:25 -- host/multipath.sh@78 -- # sleep 1 00:17:14.846 11:13:25 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.225 11:13:26 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:16.225 11:13:26 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:16.225 11:13:27 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:16.485 11:13:27 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:16.485 11:13:27 -- host/multipath.sh@65 -- # dtrace_pid=84306 00:17:16.485 11:13:27 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.485 11:13:27 -- host/multipath.sh@66 -- # sleep 6 00:17:23.089 11:13:33 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:23.089 11:13:33 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:23.089 11:13:33 -- host/multipath.sh@67 -- # active_port=4421 00:17:23.089 11:13:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.089 Attaching 4 probes... 
00:17:23.089 @path[10.0.0.2, 4421]: 20014 00:17:23.089 @path[10.0.0.2, 4421]: 20354 00:17:23.089 @path[10.0.0.2, 4421]: 20137 00:17:23.089 @path[10.0.0.2, 4421]: 20013 00:17:23.089 @path[10.0.0.2, 4421]: 20009 00:17:23.089 11:13:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:23.089 11:13:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:23.089 11:13:33 -- host/multipath.sh@69 -- # sed -n 1p 00:17:23.089 11:13:33 -- host/multipath.sh@69 -- # port=4421 00:17:23.089 11:13:33 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.089 11:13:33 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:23.089 11:13:33 -- host/multipath.sh@72 -- # kill 84306 00:17:23.089 11:13:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.089 11:13:33 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:23.089 11:13:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:23.089 11:13:34 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:23.348 11:13:34 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:23.348 11:13:34 -- host/multipath.sh@65 -- # dtrace_pid=84419 00:17:23.348 11:13:34 -- host/multipath.sh@66 -- # sleep 6 00:17:23.348 11:13:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:29.912 11:13:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:29.912 11:13:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:29.912 11:13:40 -- host/multipath.sh@67 -- # active_port=4420 00:17:29.912 11:13:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.912 Attaching 4 probes... 
00:17:29.912 @path[10.0.0.2, 4420]: 19478 00:17:29.912 @path[10.0.0.2, 4420]: 19654 00:17:29.912 @path[10.0.0.2, 4420]: 19833 00:17:29.912 @path[10.0.0.2, 4420]: 20083 00:17:29.912 @path[10.0.0.2, 4420]: 20936 00:17:29.912 11:13:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:29.912 11:13:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:29.912 11:13:40 -- host/multipath.sh@69 -- # sed -n 1p 00:17:29.912 11:13:40 -- host/multipath.sh@69 -- # port=4420 00:17:29.912 11:13:40 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:29.912 11:13:40 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:29.912 11:13:40 -- host/multipath.sh@72 -- # kill 84419 00:17:29.912 11:13:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.912 11:13:40 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:29.912 11:13:40 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:29.912 11:13:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:30.171 11:13:41 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:30.171 11:13:41 -- host/multipath.sh@65 -- # dtrace_pid=84537 00:17:30.171 11:13:41 -- host/multipath.sh@66 -- # sleep 6 00:17:30.171 11:13:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:36.732 11:13:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:36.732 11:13:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:36.732 11:13:47 -- host/multipath.sh@67 -- # active_port=4421 00:17:36.732 11:13:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.732 Attaching 4 probes... 
00:17:36.732 @path[10.0.0.2, 4421]: 14629 00:17:36.732 @path[10.0.0.2, 4421]: 20287 00:17:36.732 @path[10.0.0.2, 4421]: 20304 00:17:36.732 @path[10.0.0.2, 4421]: 20526 00:17:36.732 @path[10.0.0.2, 4421]: 19785 00:17:36.732 11:13:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:36.732 11:13:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:36.732 11:13:47 -- host/multipath.sh@69 -- # sed -n 1p 00:17:36.732 11:13:47 -- host/multipath.sh@69 -- # port=4421 00:17:36.732 11:13:47 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:36.732 11:13:47 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:36.732 11:13:47 -- host/multipath.sh@72 -- # kill 84537 00:17:36.732 11:13:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.732 11:13:47 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:36.732 11:13:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:36.733 11:13:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:36.992 11:13:48 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:36.992 11:13:48 -- host/multipath.sh@65 -- # dtrace_pid=84655 00:17:36.992 11:13:48 -- host/multipath.sh@66 -- # sleep 6 00:17:36.992 11:13:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:43.627 11:13:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:43.627 11:13:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:43.627 11:13:54 -- host/multipath.sh@67 -- # active_port= 00:17:43.627 11:13:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.627 Attaching 4 probes... 
00:17:43.627 00:17:43.627 00:17:43.627 00:17:43.627 00:17:43.627 00:17:43.627 11:13:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:43.627 11:13:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:43.627 11:13:54 -- host/multipath.sh@69 -- # sed -n 1p 00:17:43.627 11:13:54 -- host/multipath.sh@69 -- # port= 00:17:43.627 11:13:54 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:43.627 11:13:54 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:43.627 11:13:54 -- host/multipath.sh@72 -- # kill 84655 00:17:43.627 11:13:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.627 11:13:54 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:43.627 11:13:54 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:43.627 11:13:54 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:43.886 11:13:54 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:43.886 11:13:54 -- host/multipath.sh@65 -- # dtrace_pid=84773 00:17:43.886 11:13:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:43.886 11:13:54 -- host/multipath.sh@66 -- # sleep 6 00:17:50.455 11:14:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:50.455 11:14:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:50.455 11:14:01 -- host/multipath.sh@67 -- # active_port=4421 00:17:50.455 11:14:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.455 Attaching 4 probes... 
00:17:50.455 @path[10.0.0.2, 4421]: 19007 00:17:50.455 @path[10.0.0.2, 4421]: 19469 00:17:50.455 @path[10.0.0.2, 4421]: 19528 00:17:50.455 @path[10.0.0.2, 4421]: 19756 00:17:50.455 @path[10.0.0.2, 4421]: 20223 00:17:50.455 11:14:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:50.455 11:14:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:50.455 11:14:01 -- host/multipath.sh@69 -- # sed -n 1p 00:17:50.455 11:14:01 -- host/multipath.sh@69 -- # port=4421 00:17:50.455 11:14:01 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:50.455 11:14:01 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:50.455 11:14:01 -- host/multipath.sh@72 -- # kill 84773 00:17:50.455 11:14:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.455 11:14:01 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:50.455 [2024-12-06 11:14:01.368541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 [2024-12-06 11:14:01.368924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96fd60 is same with the state(5) to be set 00:17:50.455 11:14:01 -- host/multipath.sh@101 -- # sleep 1 00:17:51.393 11:14:02 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:51.393 11:14:02 -- host/multipath.sh@65 -- # dtrace_pid=84891 00:17:51.393 11:14:02 -- host/multipath.sh@66 -- # sleep 6 00:17:51.393 11:14:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:57.956 11:14:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners 
nqn.2016-06.io.spdk:cnode1 00:17:57.956 11:14:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:57.956 11:14:08 -- host/multipath.sh@67 -- # active_port=4420 00:17:57.956 11:14:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.956 Attaching 4 probes... 00:17:57.956 @path[10.0.0.2, 4420]: 19812 00:17:57.956 @path[10.0.0.2, 4420]: 20080 00:17:57.956 @path[10.0.0.2, 4420]: 20036 00:17:57.956 @path[10.0.0.2, 4420]: 19338 00:17:57.956 @path[10.0.0.2, 4420]: 19248 00:17:57.956 11:14:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:57.956 11:14:08 -- host/multipath.sh@69 -- # sed -n 1p 00:17:57.956 11:14:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:57.956 11:14:08 -- host/multipath.sh@69 -- # port=4420 00:17:57.956 11:14:08 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:57.956 11:14:08 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:57.956 11:14:08 -- host/multipath.sh@72 -- # kill 84891 00:17:57.956 11:14:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.956 11:14:08 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:57.956 [2024-12-06 11:14:08.901238] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:57.956 11:14:08 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:58.215 11:14:09 -- host/multipath.sh@111 -- # sleep 6 00:18:04.794 11:14:15 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:04.794 11:14:15 -- host/multipath.sh@65 -- # dtrace_pid=85071 00:18:04.794 11:14:15 -- host/multipath.sh@66 -- # sleep 6 00:18:04.794 11:14:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84205 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:10.065 11:14:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.065 11:14:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:10.642 11:14:21 -- host/multipath.sh@67 -- # active_port=4421 00:18:10.642 11:14:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.642 Attaching 4 probes... 
00:18:10.642 @path[10.0.0.2, 4421]: 19575 00:18:10.642 @path[10.0.0.2, 4421]: 19777 00:18:10.642 @path[10.0.0.2, 4421]: 19597 00:18:10.642 @path[10.0.0.2, 4421]: 19870 00:18:10.642 @path[10.0.0.2, 4421]: 19888 00:18:10.642 11:14:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.642 11:14:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:10.642 11:14:21 -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.642 11:14:21 -- host/multipath.sh@69 -- # port=4421 00:18:10.642 11:14:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.642 11:14:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.642 11:14:21 -- host/multipath.sh@72 -- # kill 85071 00:18:10.642 11:14:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.642 11:14:21 -- host/multipath.sh@114 -- # killprocess 84255 00:18:10.642 11:14:21 -- common/autotest_common.sh@936 -- # '[' -z 84255 ']' 00:18:10.642 11:14:21 -- common/autotest_common.sh@940 -- # kill -0 84255 00:18:10.642 11:14:21 -- common/autotest_common.sh@941 -- # uname 00:18:10.642 11:14:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.642 11:14:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84255 00:18:10.642 killing process with pid 84255 00:18:10.642 11:14:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:10.642 11:14:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:10.642 11:14:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84255' 00:18:10.642 11:14:21 -- common/autotest_common.sh@955 -- # kill 84255 00:18:10.642 11:14:21 -- common/autotest_common.sh@960 -- # wait 84255 00:18:10.642 Connection closed with partial response: 00:18:10.642 00:18:10.642 00:18:10.642 11:14:21 -- host/multipath.sh@116 -- # wait 84255 00:18:10.642 11:14:21 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:10.642 [2024-12-06 11:13:24.069052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:10.642 [2024-12-06 11:13:24.069147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84255 ] 00:18:10.642 [2024-12-06 11:13:24.204937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.642 [2024-12-06 11:13:24.240795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.642 Running I/O for 90 seconds... 
00:18:10.642 [2024-12-06 11:13:34.280485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.280599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.280966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.280987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.642 [2024-12-06 11:13:34.281632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.642 [2024-12-06 11:13:34.281670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.642 [2024-12-06 11:13:34.281691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:10.643 [2024-12-06 11:13:34.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.281961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.281989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.643 [2024-12-06 11:13:34.282481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.643 [2024-12-06 11:13:34.282502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.643 [2024-12-06 11:13:34.282516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:18:10.643-00:18:10.645 [2024-12-06 11:13:34.282537 - 11:13:34.287348] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [... repeated READ/WRITE commands on sqid:1 nsid:1, len:8, lba 98624-99696, various cid (READ: SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE: SGL DATA BLOCK OFFSET 0x0 len:0x1000); every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:18:10.645-00:18:10.648 [2024-12-06 11:13:40.876365 - 11:13:40.881925] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [... repeated READ/WRITE commands on sqid:1 nsid:1, len:8, lba 13264-14448, various cid (READ: SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE: SGL DATA BLOCK OFFSET 0x0 len:0x1000); every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:18:10.648 [2024-12-06 11:13:40.881952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.648 [2024-12-06 11:13:40.881966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.881994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.648 [2024-12-06 11:13:40.882007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.882034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.648 [2024-12-06 11:13:40.882048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.882075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.648 [2024-12-06 11:13:40.882089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.882116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.648 [2024-12-06 11:13:40.882130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.882157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.648 [2024-12-06 11:13:40.882171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.648 [2024-12-06 11:13:40.882198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.648 [2024-12-06 11:13:40.882212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.009718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.009802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.009876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.009896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.009918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.009932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.009951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.009986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:10.649 [2024-12-06 11:13:48.010311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.010761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.010984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.010998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.011016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.011029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.011049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.649 [2024-12-06 11:13:48.011062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.011087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.011101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.011121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.649 [2024-12-06 11:13:48.011136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.649 [2024-12-06 11:13:48.011155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:18:10.650 [2024-12-06 11:13:48.011355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.011918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.011971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.011984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.012097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.012135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.012201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.012233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.650 [2024-12-06 11:13:48.012298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:10.650 [2024-12-06 11:13:48.012401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.650 [2024-12-06 11:13:48.012498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:10.650 [2024-12-06 11:13:48.012517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.012953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.012973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.012987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.651 [2024-12-06 11:13:48.013453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:18:10.651 [2024-12-06 11:13:48.013472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.013682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.013696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.014649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.014678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.014711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.651 [2024-12-06 11:13:48.014727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:10.651 [2024-12-06 11:13:48.014755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.014769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.014797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.014811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.014838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.014863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.014892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.014907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.014934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.014948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.014976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.014990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.015244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.015322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.015365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:13:48.015466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:13:48.015495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:13:48.015510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 
11:14:01.369912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.369978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.369990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.652 [2024-12-06 11:14:01.370404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.652 [2024-12-06 11:14:01.370418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.652 [2024-12-06 11:14:01.370436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.370465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.370491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.370561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.370661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.370688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.370984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.370997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 
[2024-12-06 11:14:01.371066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.653 [2024-12-06 11:14:01.371542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.653 [2024-12-06 11:14:01.371674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.653 [2024-12-06 11:14:01.371688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.371802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.371972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.371993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20544 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 
[2024-12-06 11:14:01.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.654 [2024-12-06 11:14:01.372719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.654 [2024-12-06 11:14:01.372840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.654 [2024-12-06 11:14:01.372855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.372869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.372884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.372897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.372912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.372940] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.372954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.372967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.372994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 11:14:21 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.655 ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 
11:14:01.373522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.655 [2024-12-06 11:14:01.373563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.655 [2024-12-06 11:14:01.373592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.655 [2024-12-06 11:14:01.373662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.655 [2024-12-06 11:14:01.373673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20104 len:8 PRP1 0x0 PRP2 0x0 00:18:10.655 [2024-12-06 11:14:01.373686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.655 [2024-12-06 11:14:01.373731] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe13810 was disconnected and freed. reset controller. 00:18:10.655 [2024-12-06 11:14:01.374767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.655 [2024-12-06 11:14:01.374850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9cf30 (9): Bad file descriptor 00:18:10.655 [2024-12-06 11:14:01.375153] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.655 [2024-12-06 11:14:01.375294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.655 [2024-12-06 11:14:01.375355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.655 [2024-12-06 11:14:01.375378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9cf30 with addr=10.0.0.2, port=4421 00:18:10.655 [2024-12-06 11:14:01.375395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9cf30 is same with the state(5) to be set 00:18:10.655 [2024-12-06 11:14:01.375624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9cf30 (9): Bad file descriptor 00:18:10.655 [2024-12-06 11:14:01.375710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.655 [2024-12-06 11:14:01.375731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:10.655 [2024-12-06 11:14:01.375745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:10.655 [2024-12-06 11:14:01.375777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
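The ABORTED - SQ DELETION (00/08) completions above are the direct consequence of the nvmf_delete_subsystem call traced at host/multipath.sh@120: deleting nqn.2016-06.io.spdk:cnode1 tears down the active submission queue, every queued READ/WRITE is completed manually with that status, the qpair is disconnected and freed, and bdev_nvme begins resetting the controller. The first reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused) because nothing is accepting on that port at that instant; the retry roughly ten seconds later succeeds, as shown just below. While the reset is in flight, the host side can be inspected over the bdevperf RPC socket; a minimal sketch, assuming the /var/tmp/bdevperf.sock socket these host tests conventionally use:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # per-path controller state for nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1    # the bdev itself stays online across the failover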
00:18:10.655 [2024-12-06 11:14:01.375794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.655 [2024-12-06 11:14:11.411912] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:10.655 Received shutdown signal, test time was about 55.496814 seconds 00:18:10.655 00:18:10.655 Latency(us) 00:18:10.655 [2024-12-06T11:14:21.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.655 [2024-12-06T11:14:21.802Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.655 Verification LBA range: start 0x0 length 0x4000 00:18:10.655 Nvme0n1 : 55.50 11299.05 44.14 0.00 0.00 11308.77 277.41 7015926.69 00:18:10.655 [2024-12-06T11:14:21.802Z] =================================================================================================================== 00:18:10.655 [2024-12-06T11:14:21.803Z] Total : 11299.05 44.14 0.00 0.00 11308.77 277.41 7015926.69 00:18:10.915 11:14:21 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:10.915 11:14:21 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:10.915 11:14:21 -- host/multipath.sh@125 -- # nvmftestfini 00:18:10.915 11:14:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:10.915 11:14:21 -- nvmf/common.sh@116 -- # sync 00:18:10.915 11:14:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:10.915 11:14:21 -- nvmf/common.sh@119 -- # set +e 00:18:10.915 11:14:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:10.915 11:14:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:10.915 rmmod nvme_tcp 00:18:10.915 rmmod nvme_fabrics 00:18:10.915 rmmod nvme_keyring 00:18:10.915 11:14:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:10.915 11:14:22 -- nvmf/common.sh@123 -- # set -e 00:18:10.915 11:14:22 -- nvmf/common.sh@124 -- # return 0 00:18:10.915 11:14:22 -- nvmf/common.sh@477 -- # '[' -n 84205 ']' 00:18:10.915 11:14:22 -- nvmf/common.sh@478 -- # killprocess 84205 00:18:10.915 11:14:22 -- common/autotest_common.sh@936 -- # '[' -z 84205 ']' 00:18:10.915 11:14:22 -- common/autotest_common.sh@940 -- # kill -0 84205 00:18:10.915 11:14:22 -- common/autotest_common.sh@941 -- # uname 00:18:10.915 11:14:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.915 11:14:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84205 00:18:10.915 11:14:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:10.915 killing process with pid 84205 00:18:10.915 11:14:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:10.915 11:14:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84205' 00:18:10.915 11:14:22 -- common/autotest_common.sh@955 -- # kill 84205 00:18:10.915 11:14:22 -- common/autotest_common.sh@960 -- # wait 84205 00:18:11.174 11:14:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.174 11:14:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.174 11:14:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.174 11:14:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.174 11:14:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.174 11:14:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.174 11:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.174 11:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.174 11:14:22 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:11.174 00:18:11.174 real 1m1.433s 00:18:11.174 user 2m50.347s 00:18:11.174 sys 0m18.006s 00:18:11.174 11:14:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:11.174 11:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.174 ************************************ 00:18:11.174 END TEST nvmf_multipath 00:18:11.174 ************************************ 00:18:11.175 11:14:22 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:11.175 11:14:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:11.175 11:14:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.175 11:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.175 ************************************ 00:18:11.175 START TEST nvmf_timeout 00:18:11.175 ************************************ 00:18:11.175 11:14:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:11.434 * Looking for test storage... 00:18:11.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:11.434 11:14:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:11.434 11:14:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:11.434 11:14:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:11.434 11:14:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:11.434 11:14:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:11.434 11:14:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:11.434 11:14:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:11.434 11:14:22 -- scripts/common.sh@335 -- # IFS=.-: 00:18:11.434 11:14:22 -- scripts/common.sh@335 -- # read -ra ver1 00:18:11.434 11:14:22 -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.434 11:14:22 -- scripts/common.sh@336 -- # read -ra ver2 00:18:11.434 11:14:22 -- scripts/common.sh@337 -- # local 'op=<' 00:18:11.434 11:14:22 -- scripts/common.sh@339 -- # ver1_l=2 00:18:11.434 11:14:22 -- scripts/common.sh@340 -- # ver2_l=1 00:18:11.434 11:14:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:11.434 11:14:22 -- scripts/common.sh@343 -- # case "$op" in 00:18:11.434 11:14:22 -- scripts/common.sh@344 -- # : 1 00:18:11.434 11:14:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:11.434 11:14:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.434 11:14:22 -- scripts/common.sh@364 -- # decimal 1 00:18:11.434 11:14:22 -- scripts/common.sh@352 -- # local d=1 00:18:11.434 11:14:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.434 11:14:22 -- scripts/common.sh@354 -- # echo 1 00:18:11.434 11:14:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:11.434 11:14:22 -- scripts/common.sh@365 -- # decimal 2 00:18:11.434 11:14:22 -- scripts/common.sh@352 -- # local d=2 00:18:11.434 11:14:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.434 11:14:22 -- scripts/common.sh@354 -- # echo 2 00:18:11.434 11:14:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:11.434 11:14:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:11.434 11:14:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:11.434 11:14:22 -- scripts/common.sh@367 -- # return 0 00:18:11.434 11:14:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.434 11:14:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:11.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.434 --rc genhtml_branch_coverage=1 00:18:11.434 --rc genhtml_function_coverage=1 00:18:11.434 --rc genhtml_legend=1 00:18:11.434 --rc geninfo_all_blocks=1 00:18:11.434 --rc geninfo_unexecuted_blocks=1 00:18:11.434 00:18:11.434 ' 00:18:11.434 11:14:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:11.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.434 --rc genhtml_branch_coverage=1 00:18:11.434 --rc genhtml_function_coverage=1 00:18:11.434 --rc genhtml_legend=1 00:18:11.434 --rc geninfo_all_blocks=1 00:18:11.434 --rc geninfo_unexecuted_blocks=1 00:18:11.434 00:18:11.434 ' 00:18:11.434 11:14:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:11.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.434 --rc genhtml_branch_coverage=1 00:18:11.434 --rc genhtml_function_coverage=1 00:18:11.434 --rc genhtml_legend=1 00:18:11.434 --rc geninfo_all_blocks=1 00:18:11.434 --rc geninfo_unexecuted_blocks=1 00:18:11.434 00:18:11.434 ' 00:18:11.434 11:14:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:11.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.435 --rc genhtml_branch_coverage=1 00:18:11.435 --rc genhtml_function_coverage=1 00:18:11.435 --rc genhtml_legend=1 00:18:11.435 --rc geninfo_all_blocks=1 00:18:11.435 --rc geninfo_unexecuted_blocks=1 00:18:11.435 00:18:11.435 ' 00:18:11.435 11:14:22 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.435 11:14:22 -- nvmf/common.sh@7 -- # uname -s 00:18:11.435 11:14:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.435 11:14:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.435 11:14:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.435 11:14:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.435 11:14:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.435 11:14:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.435 11:14:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.435 11:14:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.435 11:14:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.435 11:14:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:18:11.435 
11:14:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:18:11.435 11:14:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.435 11:14:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.435 11:14:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.435 11:14:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.435 11:14:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.435 11:14:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.435 11:14:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.435 11:14:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.435 11:14:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.435 11:14:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.435 11:14:22 -- paths/export.sh@5 -- # export PATH 00:18:11.435 11:14:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.435 11:14:22 -- nvmf/common.sh@46 -- # : 0 00:18:11.435 11:14:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:11.435 11:14:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:11.435 11:14:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:11.435 11:14:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.435 11:14:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.435 11:14:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
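The NVME_HOSTNQN/NVME_HOSTID pair picked up at nvmf/common.sh@17-18 above is regenerated on every run with nvme gen-hostnqn, and the hostid is the UUID portion of that NQN. One way to derive the same pair by hand (a sketch; the exact extraction inside nvmf/common.sh may differ):

  HOSTNQN=$(nvme gen-hostnqn)                            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}                              # bare UUID, used for --hostid
  NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")    # appended to each nvme connect the tests issue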
00:18:11.435 11:14:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:11.435 11:14:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:11.435 11:14:22 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.435 11:14:22 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.435 11:14:22 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.435 11:14:22 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:11.435 11:14:22 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.435 11:14:22 -- host/timeout.sh@19 -- # nvmftestinit 00:18:11.435 11:14:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:11.435 11:14:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.435 11:14:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:11.435 11:14:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:11.435 11:14:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:11.435 11:14:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.435 11:14:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.435 11:14:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.435 11:14:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:11.435 11:14:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:11.435 11:14:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.435 11:14:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.435 11:14:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.435 11:14:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:11.435 11:14:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.435 11:14:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.435 11:14:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.435 11:14:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.435 11:14:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.435 11:14:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.435 11:14:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.435 11:14:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.435 11:14:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:11.435 11:14:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:11.435 Cannot find device "nvmf_tgt_br" 00:18:11.435 11:14:22 -- nvmf/common.sh@154 -- # true 00:18:11.435 11:14:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.435 Cannot find device "nvmf_tgt_br2" 00:18:11.435 11:14:22 -- nvmf/common.sh@155 -- # true 00:18:11.435 11:14:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:11.435 11:14:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:11.435 Cannot find device "nvmf_tgt_br" 00:18:11.435 11:14:22 -- nvmf/common.sh@157 -- # true 00:18:11.435 11:14:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:11.435 Cannot find device "nvmf_tgt_br2" 00:18:11.435 11:14:22 -- nvmf/common.sh@158 -- # true 00:18:11.435 11:14:22 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:11.694 11:14:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:11.694 11:14:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.694 11:14:22 -- nvmf/common.sh@161 -- # true 00:18:11.694 11:14:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.694 11:14:22 -- nvmf/common.sh@162 -- # true 00:18:11.694 11:14:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.694 11:14:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.694 11:14:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.694 11:14:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.694 11:14:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.694 11:14:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.694 11:14:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.694 11:14:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.694 11:14:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.694 11:14:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:11.694 11:14:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:11.694 11:14:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:11.694 11:14:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:11.694 11:14:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.694 11:14:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.694 11:14:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.694 11:14:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:11.694 11:14:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:11.694 11:14:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.694 11:14:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.694 11:14:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.694 11:14:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.694 11:14:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.694 11:14:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:11.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:11.694 00:18:11.694 --- 10.0.0.2 ping statistics --- 00:18:11.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.694 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:11.694 11:14:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:11.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:11.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:18:11.694 00:18:11.694 --- 10.0.0.3 ping statistics --- 00:18:11.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.694 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:11.694 11:14:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:11.695 00:18:11.695 --- 10.0.0.1 ping statistics --- 00:18:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.695 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:11.695 11:14:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.695 11:14:22 -- nvmf/common.sh@421 -- # return 0 00:18:11.695 11:14:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.695 11:14:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.695 11:14:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.695 11:14:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.695 11:14:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.695 11:14:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.695 11:14:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.695 11:14:22 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:11.695 11:14:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.695 11:14:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.695 11:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.695 11:14:22 -- nvmf/common.sh@469 -- # nvmfpid=85386 00:18:11.695 11:14:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:11.695 11:14:22 -- nvmf/common.sh@470 -- # waitforlisten 85386 00:18:11.695 11:14:22 -- common/autotest_common.sh@829 -- # '[' -z 85386 ']' 00:18:11.695 11:14:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.695 11:14:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.695 11:14:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.695 11:14:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.695 11:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.954 [2024-12-06 11:14:22.887909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.954 [2024-12-06 11:14:22.888011] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.954 [2024-12-06 11:14:23.028217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:11.954 [2024-12-06 11:14:23.059175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.954 [2024-12-06 11:14:23.059389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.954 [2024-12-06 11:14:23.059402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
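To make the interface soup above easier to follow, this is the topology that nvmf_veth_init has just built, restated as a condensed sketch of the same ip/iptables commands (nothing here is an additional test step; the earlier "Cannot find device" / "Cannot open network namespace" errors appear to be the pre-setup cleanup of devices that did not exist yet):

  # one veth pair per interface; the *_br ends stay in the default netns and join bridge nvmf_br
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,   10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side,   10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                              # initiator -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator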
00:18:11.954 [2024-12-06 11:14:23.059411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.954 [2024-12-06 11:14:23.059556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.954 [2024-12-06 11:14:23.059600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.891 11:14:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.891 11:14:23 -- common/autotest_common.sh@862 -- # return 0 00:18:12.891 11:14:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:12.891 11:14:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:12.891 11:14:23 -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 11:14:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.891 11:14:23 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.891 11:14:23 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:13.151 [2024-12-06 11:14:24.187469] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.151 11:14:24 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:13.411 Malloc0 00:18:13.411 11:14:24 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.670 11:14:24 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.930 11:14:24 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.188 [2024-12-06 11:14:25.216542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.188 11:14:25 -- host/timeout.sh@32 -- # bdevperf_pid=85435 00:18:14.188 11:14:25 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:14.188 11:14:25 -- host/timeout.sh@34 -- # waitforlisten 85435 /var/tmp/bdevperf.sock 00:18:14.188 11:14:25 -- common/autotest_common.sh@829 -- # '[' -z 85435 ']' 00:18:14.188 11:14:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.188 11:14:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.188 11:14:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.188 11:14:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.188 11:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:14.188 [2024-12-06 11:14:25.281772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
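For reference, the target-side bring-up that host/timeout.sh has just performed reduces to the following rpc.py sequence (commands copied from the trace above; the inline comments are my reading of the flags, not part of the log, and nvmf_tgt itself was started inside nvmf_tgt_ns_spdk as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3"):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf started with -z, so it idles until told to run over its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f

The workload itself is then kicked off with bdevperf.py -s /var/tmp/bdevperf.sock perform_tests once the controller has been attached, as traced below.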
00:18:14.188 [2024-12-06 11:14:25.281892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85435 ] 00:18:14.446 [2024-12-06 11:14:25.423531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.446 [2024-12-06 11:14:25.463801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.379 11:14:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.379 11:14:26 -- common/autotest_common.sh@862 -- # return 0 00:18:15.379 11:14:26 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:15.379 11:14:26 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:15.648 NVMe0n1 00:18:15.648 11:14:26 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.648 11:14:26 -- host/timeout.sh@51 -- # rpc_pid=85459 00:18:15.648 11:14:26 -- host/timeout.sh@53 -- # sleep 1 00:18:15.924 Running I/O for 10 seconds... 00:18:16.859 11:14:27 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.859 [2024-12-06 11:14:27.949604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 [2024-12-06 11:14:27.949746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b9c0 is same with the state(5) to be set 00:18:16.859 
[... tcp.c:1576:nvmf_tcp_qpair_set_recv_state: the *ERROR* line above, "The recv state of tqpair=0x176b9c0 is same with the state(5) to be set", is repeated roughly 40 more times between 11:14:27.949753 and 11:14:27.949965 while the listener is torn down ...]
[... nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion then dump every outstanding command on sqid:1 (READ and WRITE, nsid:1, len:8, lba roughly 127000-128344), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:18:16.863 [2024-12-06 11:14:27.952704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8acf0 is same with the state(5) to be set 00:18:16.863 [2024-12-06 11:14:27.952718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:18:16.863 [2024-12-06 11:14:27.952726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.863 [2024-12-06 11:14:27.952734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127744 len:8 PRP1 0x0 PRP2 0x0 00:18:16.863 [2024-12-06 11:14:27.952744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.863 [2024-12-06 11:14:27.952784] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e8acf0 was disconnected and freed. reset controller. 00:18:16.863 [2024-12-06 11:14:27.953032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.863 [2024-12-06 11:14:27.953111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ac20 (9): Bad file descriptor 00:18:16.863 [2024-12-06 11:14:27.953224] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.863 [2024-12-06 11:14:27.953281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.863 [2024-12-06 11:14:27.953320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.863 [2024-12-06 11:14:27.953335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3ac20 with addr=10.0.0.2, port=4420 00:18:16.863 [2024-12-06 11:14:27.953345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3ac20 is same with the state(5) to be set 00:18:16.863 [2024-12-06 11:14:27.953363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ac20 (9): Bad file descriptor 00:18:16.863 [2024-12-06 11:14:27.953391] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.863 [2024-12-06 11:14:27.953401] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:16.863 [2024-12-06 11:14:27.953411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.863 [2024-12-06 11:14:27.953430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:16.863 [2024-12-06 11:14:27.953441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.863 11:14:27 -- host/timeout.sh@56 -- # sleep 2 00:18:19.393 [2024-12-06 11:14:29.953577] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.393 [2024-12-06 11:14:29.953692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.393 [2024-12-06 11:14:29.953732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.393 [2024-12-06 11:14:29.953748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3ac20 with addr=10.0.0.2, port=4420 00:18:19.393 [2024-12-06 11:14:29.953761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3ac20 is same with the state(5) to be set 00:18:19.393 [2024-12-06 11:14:29.953786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ac20 (9): Bad file descriptor 00:18:19.393 [2024-12-06 11:14:29.953803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.393 [2024-12-06 11:14:29.953812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:19.393 [2024-12-06 11:14:29.953823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:19.393 [2024-12-06 11:14:29.953850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.393 [2024-12-06 11:14:29.953861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:19.393 11:14:29 -- host/timeout.sh@57 -- # get_controller 00:18:19.393 11:14:29 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:19.393 11:14:29 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:19.393 11:14:30 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:19.393 11:14:30 -- host/timeout.sh@58 -- # get_bdev 00:18:19.393 11:14:30 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:19.393 11:14:30 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:19.393 11:14:30 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:19.393 11:14:30 -- host/timeout.sh@61 -- # sleep 5 00:18:21.294 [2024-12-06 11:14:31.954010] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.294 [2024-12-06 11:14:31.954108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.294 [2024-12-06 11:14:31.954148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.294 [2024-12-06 11:14:31.954165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3ac20 with addr=10.0.0.2, port=4420 00:18:21.294 [2024-12-06 11:14:31.954178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3ac20 is same with the state(5) to be set 00:18:21.294 [2024-12-06 11:14:31.954236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ac20 (9): Bad file descriptor 00:18:21.294 [2024-12-06 11:14:31.954254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.294 [2024-12-06 11:14:31.954264] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:21.294 [2024-12-06 11:14:31.954275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:21.294 [2024-12-06 11:14:31.954318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:21.294 [2024-12-06 11:14:31.954330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.215 [2024-12-06 11:14:33.954358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:23.215 [2024-12-06 11:14:33.954435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:23.215 [2024-12-06 11:14:33.954446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:23.215 [2024-12-06 11:14:33.954457] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:23.215 [2024-12-06 11:14:33.954486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:24.151 00:18:24.151 Latency(us) 00:18:24.151 [2024-12-06T11:14:35.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.151 [2024-12-06T11:14:35.298Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.151 Verification LBA range: start 0x0 length 0x4000 00:18:24.151 NVMe0n1 : 8.13 1958.10 7.65 15.75 0.00 64764.25 3157.64 7015926.69 00:18:24.151 [2024-12-06T11:14:35.298Z] =================================================================================================================== 00:18:24.151 [2024-12-06T11:14:35.298Z] Total : 1958.10 7.65 15.75 0.00 64764.25 3157.64 7015926.69 00:18:24.151 0 00:18:24.409 11:14:35 -- host/timeout.sh@62 -- # get_controller 00:18:24.409 11:14:35 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.409 11:14:35 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:24.667 11:14:35 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:24.667 11:14:35 -- host/timeout.sh@63 -- # get_bdev 00:18:24.667 11:14:35 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:24.667 11:14:35 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:24.925 11:14:36 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:24.925 11:14:36 -- host/timeout.sh@65 -- # wait 85459 00:18:24.925 11:14:36 -- host/timeout.sh@67 -- # killprocess 85435 00:18:24.925 11:14:36 -- common/autotest_common.sh@936 -- # '[' -z 85435 ']' 00:18:24.925 11:14:36 -- common/autotest_common.sh@940 -- # kill -0 85435 00:18:24.925 11:14:36 -- common/autotest_common.sh@941 -- # uname 00:18:24.925 11:14:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.925 11:14:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85435 00:18:24.925 11:14:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:24.925 11:14:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:24.925 killing process with pid 85435 00:18:24.925 11:14:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85435' 00:18:24.925 11:14:36 -- common/autotest_common.sh@955 -- # kill 85435 00:18:24.925 Received shutdown signal, test time was about 
9.231069 seconds 00:18:24.925 00:18:24.925 Latency(us) 00:18:24.925 [2024-12-06T11:14:36.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.925 [2024-12-06T11:14:36.072Z] =================================================================================================================== 00:18:24.925 [2024-12-06T11:14:36.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.925 11:14:36 -- common/autotest_common.sh@960 -- # wait 85435 00:18:25.184 11:14:36 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.442 [2024-12-06 11:14:36.433489] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.443 11:14:36 -- host/timeout.sh@74 -- # bdevperf_pid=85576 00:18:25.443 11:14:36 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:25.443 11:14:36 -- host/timeout.sh@76 -- # waitforlisten 85576 /var/tmp/bdevperf.sock 00:18:25.443 11:14:36 -- common/autotest_common.sh@829 -- # '[' -z 85576 ']' 00:18:25.443 11:14:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.443 11:14:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.443 11:14:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.443 11:14:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.443 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:18:25.443 [2024-12-06 11:14:36.494794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:25.443 [2024-12-06 11:14:36.494887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85576 ] 00:18:25.701 [2024-12-06 11:14:36.627916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.701 [2024-12-06 11:14:36.659512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.636 11:14:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.636 11:14:37 -- common/autotest_common.sh@862 -- # return 0 00:18:26.636 11:14:37 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:26.636 11:14:37 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:26.894 NVMe0n1 00:18:26.894 11:14:38 -- host/timeout.sh@84 -- # rpc_pid=85605 00:18:26.894 11:14:38 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.894 11:14:38 -- host/timeout.sh@86 -- # sleep 1 00:18:27.153 Running I/O for 10 seconds... 
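[Annotation] The xtrace lines above show how the test polls the bdevperf RPC socket to check which controller and bdev are still registered, and how it then tears the bdevperf process down. The following is a minimal shell sketch assembled from those traced commands; the function bodies are a reconstruction for illustration, not an excerpt of host/timeout.sh or autotest_common.sh, and the sudo special-case from the trace is omitted.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    get_controller() {
        # list attached NVMe controllers over the bdevperf RPC socket, names only
        "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # list bdevs the same way; NVMe0n1 should disappear once the controller is lost
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # process must still exist
        ps --no-headers -o comm= "$pid"             # reactor_2 in this run
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it and collect the exit status
    }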
00:18:28.091 11:14:39 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.353 [2024-12-06 11:14:39.272908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.272969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.272979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.272986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.272994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176b520 is same with the state(5) to be set 00:18:28.353 [2024-12-06 11:14:39.273114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.353 [2024-12-06 11:14:39.273144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 
11:14:39.273194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273638] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.273900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.354 [2024-12-06 11:14:39.273972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.273998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.354 [2024-12-06 11:14:39.274006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.354 [2024-12-06 11:14:39.274030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:28.355 [2024-12-06 11:14:39.274086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 
11:14:39.274300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.355 [2024-12-06 11:14:39.274654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.355 [2024-12-06 11:14:39.274823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.355 [2024-12-06 11:14:39.274832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.274852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.274872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.274891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 
11:14:39.274915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.274935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.274954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.274974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.274985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.274993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275111] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 
[2024-12-06 11:14:39.275594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.356 [2024-12-06 11:14:39.275695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.356 [2024-12-06 11:14:39.275706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.356 [2024-12-06 11:14:39.275715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.357 [2024-12-06 11:14:39.275853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a15cf0 is same with the state(5) to be set 00:18:28.357 [2024-12-06 11:14:39.275879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:28.357 [2024-12-06 11:14:39.275889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:28.357 [2024-12-06 11:14:39.275897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130912 len:8 PRP1 0x0 PRP2 0x0 00:18:28.357 [2024-12-06 11:14:39.275906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.357 [2024-12-06 11:14:39.275946] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a15cf0 was disconnected and freed. reset controller. 00:18:28.357 [2024-12-06 11:14:39.276167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.357 [2024-12-06 11:14:39.276242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:28.357 [2024-12-06 11:14:39.276336] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.357 [2024-12-06 11:14:39.276394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.357 [2024-12-06 11:14:39.276432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.357 [2024-12-06 11:14:39.276446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420 00:18:28.357 [2024-12-06 11:14:39.276456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set 00:18:28.357 [2024-12-06 11:14:39.276474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:28.357 [2024-12-06 11:14:39.276505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:28.357 [2024-12-06 11:14:39.276516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:28.357 [2024-12-06 11:14:39.276526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
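Each aborted command above carries the status pair "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. In other words, the queued reads and writes were failed because their submission queue disappeared when the TCP qpair was torn down, not because the I/O itself hit a media error. A purely illustrative one-liner (not part of the test) that mirrors the SCT/SC suffix SPDK prints:

  python3 -c 'sct, sc = 0x0, 0x08; print({0x08: "ABORTED - SQ DELETION"}[sc], f"({sct:02x}/{sc:02x})")'
  # -> ABORTED - SQ DELETION (00/08)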
00:18:28.357 [2024-12-06 11:14:39.276559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-12-06 11:14:39.276572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
11:14:39 -- host/timeout.sh@90 -- # sleep 1
00:18:29.304 [2024-12-06 11:14:40.276690] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:29.304 [2024-12-06 11:14:40.276792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:29.304 [2024-12-06 11:14:40.276831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:29.304 [2024-12-06 11:14:40.276846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420
00:18:29.304 [2024-12-06 11:14:40.276859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set
00:18:29.304 [2024-12-06 11:14:40.276883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor
00:18:29.304 [2024-12-06 11:14:40.276901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:29.304 [2024-12-06 11:14:40.276910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:29.304 [2024-12-06 11:14:40.276920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:29.304 [2024-12-06 11:14:40.276945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:29.304 [2024-12-06 11:14:40.276956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:29.304 11:14:40 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:29.563 [2024-12-06 11:14:40.546724] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:29.563 11:14:40 -- host/timeout.sh@92 -- # wait 85605
00:18:30.500 [2024-12-06 11:14:41.297206] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:37.063
00:18:37.063 Latency(us)
00:18:37.063 [2024-12-06T11:14:48.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.063 [2024-12-06T11:14:48.210Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:37.063 Verification LBA range: start 0x0 length 0x4000
00:18:37.063 NVMe0n1 : 10.01 9974.20 38.96 0.00 0.00 12811.88 912.29 3019898.88
00:18:37.063 [2024-12-06T11:14:48.210Z] ===================================================================================================================
00:18:37.063 [2024-12-06T11:14:48.210Z] Total : 9974.20 38.96 0.00 0.00 12811.88 912.29 3019898.88
00:18:37.063 0
00:18:37.063 11:14:48 -- host/timeout.sh@97 -- # rpc_pid=85710
00:18:37.063 11:14:48 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:37.063 11:14:48 -- host/timeout.sh@98 -- # sleep 1
00:18:37.322 Running I/O for 10 seconds...
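The host/timeout.sh@90-@103 trace markers scattered through this section outline the cycle the test keeps repeating: start a 10-second verify run through bdevperf, pull the TCP listener out from under it (triggering the abort storm and reconnect loop above), then restore the listener and wait for the run to finish. The shell below is a hedged reconstruction of one such cycle, not the actual script: the paths, RPC names, NQN, and address are taken from the trace, while the linear ordering and the helper variables ($rpc_py, $bperf_py, $listener, $rpc_pid) are illustrative assumptions.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1
  listener="-t tcp -a 10.0.0.2 -s 4420"

  $bperf_py perform_tests &                               # start a 10s verify run (@96; pids 85605/85710 in this log)
  rpc_pid=$!                                              # @97
  sleep 1                                                 # @98: let I/O ramp up
  $rpc_py nvmf_subsystem_remove_listener $nqn $listener   # @99: drop the target port while I/O is in flight
  sleep 1                                                 # @90 (sleep 1) / @101 (sleep 3): leave the host retrying
  $rpc_py nvmf_subsystem_add_listener $nqn $listener      # @91/@102: bring the port back
  wait $rpc_pid                                           # @92/@103: the verify run can now complete

Run against the live bdevperf socket, it is the add_listener call that turns the repeated "Resetting controller failed." messages into the "Resetting controller successful." notice seen just above.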
00:18:38.274 11:14:49 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.535 [2024-12-06 11:14:49.429213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771f10 is same with the state(5) to be set 00:18:38.535 [2024-12-06 11:14:49.429647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.535 [2024-12-06 11:14:49.429908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.535 [2024-12-06 11:14:49.429918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.429929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.429939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.429951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.429960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.429971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.429981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.429992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128008 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 
[2024-12-06 11:14:49.430424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.536 [2024-12-06 11:14:49.430563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.536 [2024-12-06 11:14:49.430745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.536 [2024-12-06 11:14:49.430754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.430817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.430922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.430985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.430997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.537 [2024-12-06 11:14:49.431579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.537 [2024-12-06 11:14:49.431601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.537 [2024-12-06 11:14:49.431613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 
[2024-12-06 11:14:49.431740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.431921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431954] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.431985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.431997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.538 [2024-12-06 11:14:49.432217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:38.538 [2024-12-06 11:14:49.432405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.538 [2024-12-06 11:14:49.432417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acbe40 is same with the state(5) to be set 00:18:38.538 [2024-12-06 11:14:49.432430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:38.538 [2024-12-06 11:14:49.432438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:38.538 [2024-12-06 11:14:49.432447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128048 len:8 PRP1 0x0 PRP2 0x0 00:18:38.538 [2024-12-06 11:14:49.432456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.539 [2024-12-06 11:14:49.432497] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1acbe40 was disconnected and freed. reset controller. 00:18:38.539 [2024-12-06 11:14:49.432752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.539 [2024-12-06 11:14:49.432838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:38.539 [2024-12-06 11:14:49.432942] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.539 [2024-12-06 11:14:49.433006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.539 [2024-12-06 11:14:49.433046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.539 [2024-12-06 11:14:49.433067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420 00:18:38.539 [2024-12-06 11:14:49.433079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set 00:18:38.539 [2024-12-06 11:14:49.433097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:38.539 [2024-12-06 11:14:49.433120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:38.539 [2024-12-06 11:14:49.433131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:38.539 [2024-12-06 11:14:49.433141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:38.539 [2024-12-06 11:14:49.433161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:38.539 [2024-12-06 11:14:49.433172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.539 11:14:49 -- host/timeout.sh@101 -- # sleep 3 00:18:39.476 [2024-12-06 11:14:50.433282] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.476 [2024-12-06 11:14:50.433406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.476 [2024-12-06 11:14:50.433447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.476 [2024-12-06 11:14:50.433462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420 00:18:39.476 [2024-12-06 11:14:50.433475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set 00:18:39.476 [2024-12-06 11:14:50.433500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:39.476 [2024-12-06 11:14:50.433519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:39.476 [2024-12-06 11:14:50.433528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:39.476 [2024-12-06 11:14:50.433538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:39.476 [2024-12-06 11:14:50.433575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:39.476 [2024-12-06 11:14:50.433587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.413 [2024-12-06 11:14:51.433694] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.413 [2024-12-06 11:14:51.433809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.413 [2024-12-06 11:14:51.433850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.413 [2024-12-06 11:14:51.433866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420 00:18:40.413 [2024-12-06 11:14:51.433878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set 00:18:40.413 [2024-12-06 11:14:51.433901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:40.413 [2024-12-06 11:14:51.433920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.413 [2024-12-06 11:14:51.433929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.413 [2024-12-06 11:14:51.433939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.413 [2024-12-06 11:14:51.433978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
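While host/timeout.sh@101 sleeps for 3 seconds, the same reconnect cycle repeats roughly once per second: connect() is refused, "controller reinitialization failed" is logged, the controller is put back into the failed state, and the next reset is scheduled. If you want to watch this from the host side rather than from the console log, the bdevperf RPC socket can be polled; a minimal sketch, assuming the standard bdev_nvme_get_controllers RPC (the test script itself does not do this):

# Poll controller state over the bdevperf RPC socket while the listener is down
# (bdev_nvme_get_controllers is an assumed helper here, not part of timeout.sh):
for i in 1 2 3; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    sleep 1
done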
00:18:40.413 [2024-12-06 11:14:51.433989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.350 [2024-12-06 11:14:52.435768] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.350 [2024-12-06 11:14:52.435901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.350 [2024-12-06 11:14:52.435946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.350 [2024-12-06 11:14:52.435963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c5c20 with addr=10.0.0.2, port=4420 00:18:41.350 [2024-12-06 11:14:52.435976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c5c20 is same with the state(5) to be set 00:18:41.350 [2024-12-06 11:14:52.436156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c5c20 (9): Bad file descriptor 00:18:41.350 [2024-12-06 11:14:52.436294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.350 [2024-12-06 11:14:52.436308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:41.350 [2024-12-06 11:14:52.436319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:41.350 [2024-12-06 11:14:52.438801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.350 [2024-12-06 11:14:52.438848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.350 11:14:52 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.609 [2024-12-06 11:14:52.671193] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.609 11:14:52 -- host/timeout.sh@103 -- # wait 85710 00:18:42.546 [2024-12-06 11:14:53.462678] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
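Recovery is the mirror image: as soon as host/timeout.sh@102 re-adds the TCP listener on the target (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), the next reconnect poll connects, the pending reset completes, and the log reports "Resetting controller successful." The target-side toggle this test exercises is just the listener pair below; both commands appear verbatim in the log (the remove half shows up again at host/timeout.sh@126 a little further down):

# Drop the listener to provoke aborted I/O and the host's reconnect loop ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... then restore it so the pending reset/reconnect can finish.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420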
00:18:47.882 00:18:47.882 Latency(us) 00:18:47.882 [2024-12-06T11:14:59.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.882 [2024-12-06T11:14:59.029Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:47.882 Verification LBA range: start 0x0 length 0x4000 00:18:47.882 NVMe0n1 : 10.01 8474.42 33.10 6092.30 0.00 8772.89 446.84 3019898.88 00:18:47.882 [2024-12-06T11:14:59.029Z] =================================================================================================================== 00:18:47.882 [2024-12-06T11:14:59.029Z] Total : 8474.42 33.10 6092.30 0.00 8772.89 0.00 3019898.88 00:18:47.882 0 00:18:47.882 11:14:58 -- host/timeout.sh@105 -- # killprocess 85576 00:18:47.882 11:14:58 -- common/autotest_common.sh@936 -- # '[' -z 85576 ']' 00:18:47.882 11:14:58 -- common/autotest_common.sh@940 -- # kill -0 85576 00:18:47.882 11:14:58 -- common/autotest_common.sh@941 -- # uname 00:18:47.882 11:14:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:47.882 11:14:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85576 00:18:47.882 killing process with pid 85576 00:18:47.882 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.882 00:18:47.882 Latency(us) 00:18:47.882 [2024-12-06T11:14:59.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.882 [2024-12-06T11:14:59.029Z] =================================================================================================================== 00:18:47.882 [2024-12-06T11:14:59.029Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.882 11:14:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:47.882 11:14:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:47.882 11:14:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85576' 00:18:47.882 11:14:58 -- common/autotest_common.sh@955 -- # kill 85576 00:18:47.882 11:14:58 -- common/autotest_common.sh@960 -- # wait 85576 00:18:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.882 11:14:58 -- host/timeout.sh@110 -- # bdevperf_pid=85824 00:18:47.882 11:14:58 -- host/timeout.sh@112 -- # waitforlisten 85824 /var/tmp/bdevperf.sock 00:18:47.883 11:14:58 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:47.883 11:14:58 -- common/autotest_common.sh@829 -- # '[' -z 85824 ']' 00:18:47.883 11:14:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.883 11:14:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.883 11:14:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.883 11:14:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.883 11:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:47.883 [2024-12-06 11:14:58.542107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
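The next case (host/timeout.sh@109 onward) starts a fresh bdevperf in wait mode and drives it entirely over JSON-RPC, which is the sequence the log walks through next: attach the nvmf_timeout.bt bpftrace probe, set the bdev_nvme options, attach the TCP controller with a 5 s controller-loss timeout and a 2 s reconnect delay, then kick off the workload. Condensed from the commands recorded in the log (the script's waitforlisten and bpftrace steps are omitted and the bdevperf launch is simply backgrounded here), the host-side flow is roughly:

# Start bdevperf idle (-z: wait to be driven over RPC) on core mask 0x4;
# 128-deep 4 KiB random reads for 10 s once the run is started:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

# Configure bdev_nvme and attach the TCP controller with the timeout knobs
# under test (5 s controller-loss timeout, 2 s reconnect delay):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the I/O phase ("Running I/O for 10 seconds..."):
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests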
00:18:47.883 [2024-12-06 11:14:58.542434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85824 ] 00:18:47.883 [2024-12-06 11:14:58.684560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.883 [2024-12-06 11:14:58.719150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.450 11:14:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.450 11:14:59 -- common/autotest_common.sh@862 -- # return 0 00:18:48.450 11:14:59 -- host/timeout.sh@116 -- # dtrace_pid=85840 00:18:48.450 11:14:59 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:48.450 11:14:59 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:48.708 11:14:59 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:48.967 NVMe0n1 00:18:49.226 11:15:00 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.226 11:15:00 -- host/timeout.sh@124 -- # rpc_pid=85886 00:18:49.226 11:15:00 -- host/timeout.sh@125 -- # sleep 1 00:18:49.226 Running I/O for 10 seconds... 00:18:50.164 11:15:01 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.459 [2024-12-06 11:15:01.377174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.459 [2024-12-06 11:15:01.377405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377479] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the 
state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.377998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378021] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 
11:15:01.378086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same 
with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17702f0 is same with the state(5) to be set 00:18:50.460 [2024-12-06 11:15:01.378511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.460 [2024-12-06 11:15:01.378926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.460 [2024-12-06 11:15:01.378938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.378947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.378958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.378968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.378979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.378989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379076] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 
11:15:01.379827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.379989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.379998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.461 [2024-12-06 11:15:01.380274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.461 [2024-12-06 11:15:01.380284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.462 [2024-12-06 11:15:01.380661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380902] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.380977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.380986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.462 [2024-12-06 11:15:01.381353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.462 [2024-12-06 11:15:01.381362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.463 [2024-12-06 11:15:01.381520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4070 is same with the state(5) to be set 00:18:50.463 [2024-12-06 11:15:01.381544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.463 [2024-12-06 11:15:01.381568] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.463 [2024-12-06 11:15:01.381576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:8 PRP1 0x0 PRP2 0x0 00:18:50.463 [2024-12-06 11:15:01.381585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.463 [2024-12-06 11:15:01.381640] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xee4070 was disconnected and freed. reset controller. 00:18:50.463 [2024-12-06 11:15:01.381935] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.463 [2024-12-06 11:15:01.382036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1ea0 (9): Bad file descriptor 00:18:50.463 [2024-12-06 11:15:01.382142] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.463 [2024-12-06 11:15:01.382203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.463 [2024-12-06 11:15:01.382245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.463 [2024-12-06 11:15:01.382261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb1ea0 with addr=10.0.0.2, port=4420 00:18:50.463 [2024-12-06 11:15:01.382272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1ea0 is same with the state(5) to be set 00:18:50.463 [2024-12-06 11:15:01.382291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1ea0 (9): Bad file descriptor 00:18:50.463 [2024-12-06 11:15:01.382307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:50.463 [2024-12-06 11:15:01.382317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:50.463 [2024-12-06 11:15:01.382328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.463 [2024-12-06 11:15:01.382347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
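The burst of "ABORTED - SQ DELETION (00/08)" completions above is the host side draining every READ still queued on qpair 1 once the submission queue is torn down, and the connect() failures with errno = 111 that follow are ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more, so each reset attempt fails before a TCP session can even be set up. A quick manual probe of that state, assuming the same address and port as in this log (the nc check is illustrative and not part of host/timeout.sh), would be:

  nc -z -w1 10.0.0.2 4420 \
    && echo "target is listening on port 4420" \
    || echo "no listener: connect() fails with ECONNREFUSED (errno 111)"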
00:18:50.463 [2024-12-06 11:15:01.382370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.463 11:15:01 -- host/timeout.sh@128 -- # wait 85886 00:18:52.364 [2024-12-06 11:15:03.382603] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.365 [2024-12-06 11:15:03.382758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.365 [2024-12-06 11:15:03.382828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.365 [2024-12-06 11:15:03.382862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb1ea0 with addr=10.0.0.2, port=4420 00:18:52.365 [2024-12-06 11:15:03.382877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1ea0 is same with the state(5) to be set 00:18:52.365 [2024-12-06 11:15:03.382914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1ea0 (9): Bad file descriptor 00:18:52.365 [2024-12-06 11:15:03.382957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:52.365 [2024-12-06 11:15:03.382969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:52.365 [2024-12-06 11:15:03.382980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:52.365 [2024-12-06 11:15:03.383025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:52.365 [2024-12-06 11:15:03.383043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:54.271 [2024-12-06 11:15:05.383204] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.271 [2024-12-06 11:15:05.383526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.271 [2024-12-06 11:15:05.383641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.271 [2024-12-06 11:15:05.383662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb1ea0 with addr=10.0.0.2, port=4420 00:18:54.271 [2024-12-06 11:15:05.383676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1ea0 is same with the state(5) to be set 00:18:54.271 [2024-12-06 11:15:05.383717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1ea0 (9): Bad file descriptor 00:18:54.271 [2024-12-06 11:15:05.383736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:54.271 [2024-12-06 11:15:05.383746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:54.271 [2024-12-06 11:15:05.383756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:54.271 [2024-12-06 11:15:05.383783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:54.271 [2024-12-06 11:15:05.383795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.801 [2024-12-06 11:15:07.383862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
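The retries above fire at 11:15:01, :03, :05 and :07, i.e. the bdev_nvme layer re-arms the controller reset roughly every two seconds for as long as the target stays unreachable, which is the reconnect-delay behaviour this timeout test is measuring. The exact options host/timeout.sh passes are not visible in this excerpt; as a hedged sketch, the equivalent per-controller knobs on a recent rpc.py would look like the following (option names assumed from current SPDK, values chosen only to mirror the ~2 s cadence seen here):

  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 10 --reconnect-delay-sec 2

With settings like these the bdev layer keeps retrying the connect once per reconnect-delay-sec and only declares the controller lost after ctrlr-loss-timeout-sec expires.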
00:18:56.801 [2024-12-06 11:15:07.384147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.801 [2024-12-06 11:15:07.384185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.801 [2024-12-06 11:15:07.384198] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:56.801 [2024-12-06 11:15:07.384233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:57.369 00:18:57.369 Latency(us) 00:18:57.369 [2024-12-06T11:15:08.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.369 [2024-12-06T11:15:08.516Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:57.369 NVMe0n1 : 8.14 2168.24 8.47 15.72 0.00 58512.02 7328.12 7046430.72 00:18:57.369 [2024-12-06T11:15:08.516Z] =================================================================================================================== 00:18:57.369 [2024-12-06T11:15:08.517Z] Total : 2168.24 8.47 15.72 0.00 58512.02 7328.12 7046430.72 00:18:57.370 0 00:18:57.370 11:15:08 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.370 Attaching 5 probes... 00:18:57.370 1334.096197: reset bdev controller NVMe0 00:18:57.370 1334.263639: reconnect bdev controller NVMe0 00:18:57.370 3334.585854: reconnect delay bdev controller NVMe0 00:18:57.370 3334.620719: reconnect bdev controller NVMe0 00:18:57.370 5335.268701: reconnect delay bdev controller NVMe0 00:18:57.370 5335.301609: reconnect bdev controller NVMe0 00:18:57.370 7336.007692: reconnect delay bdev controller NVMe0 00:18:57.370 7336.042505: reconnect bdev controller NVMe0 00:18:57.370 11:15:08 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:57.370 11:15:08 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:57.370 11:15:08 -- host/timeout.sh@136 -- # kill 85840 00:18:57.370 11:15:08 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.370 11:15:08 -- host/timeout.sh@139 -- # killprocess 85824 00:18:57.370 11:15:08 -- common/autotest_common.sh@936 -- # '[' -z 85824 ']' 00:18:57.370 11:15:08 -- common/autotest_common.sh@940 -- # kill -0 85824 00:18:57.370 11:15:08 -- common/autotest_common.sh@941 -- # uname 00:18:57.370 11:15:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.370 11:15:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85824 00:18:57.370 killing process with pid 85824 00:18:57.370 Received shutdown signal, test time was about 8.205378 seconds 00:18:57.370 00:18:57.370 Latency(us) 00:18:57.370 [2024-12-06T11:15:08.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.370 [2024-12-06T11:15:08.517Z] =================================================================================================================== 00:18:57.370 [2024-12-06T11:15:08.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.370 11:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:57.370 11:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:57.370 11:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85824' 00:18:57.370 11:15:08 -- common/autotest_common.sh@955 -- # kill 85824 00:18:57.370 11:15:08 -- common/autotest_common.sh@960 -- # wait 85824 00:18:57.629 11:15:08 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.889 11:15:08 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:57.889 11:15:08 -- host/timeout.sh@145 -- # nvmftestfini 00:18:57.889 11:15:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.889 11:15:08 -- nvmf/common.sh@116 -- # sync 00:18:57.889 11:15:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.889 11:15:08 -- nvmf/common.sh@119 -- # set +e 00:18:57.889 11:15:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.889 11:15:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.889 rmmod nvme_tcp 00:18:57.889 rmmod nvme_fabrics 00:18:57.889 rmmod nvme_keyring 00:18:57.889 11:15:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.889 11:15:08 -- nvmf/common.sh@123 -- # set -e 00:18:57.889 11:15:08 -- nvmf/common.sh@124 -- # return 0 00:18:57.889 11:15:08 -- nvmf/common.sh@477 -- # '[' -n 85386 ']' 00:18:57.889 11:15:08 -- nvmf/common.sh@478 -- # killprocess 85386 00:18:57.889 11:15:08 -- common/autotest_common.sh@936 -- # '[' -z 85386 ']' 00:18:57.889 11:15:08 -- common/autotest_common.sh@940 -- # kill -0 85386 00:18:57.889 11:15:08 -- common/autotest_common.sh@941 -- # uname 00:18:57.889 11:15:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.889 11:15:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85386 00:18:57.889 killing process with pid 85386 00:18:57.889 11:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.889 11:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.889 11:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85386' 00:18:57.889 11:15:08 -- common/autotest_common.sh@955 -- # kill 85386 00:18:57.889 11:15:08 -- common/autotest_common.sh@960 -- # wait 85386 00:18:58.149 11:15:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:58.149 11:15:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:58.149 11:15:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:58.149 11:15:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.149 11:15:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:58.149 11:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.149 11:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.149 11:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.149 11:15:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:58.149 ************************************ 00:18:58.149 END TEST nvmf_timeout 00:18:58.149 ************************************ 00:18:58.149 00:18:58.149 real 0m46.862s 00:18:58.149 user 2m18.104s 00:18:58.149 sys 0m5.216s 00:18:58.149 11:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.149 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:18:58.149 11:15:09 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:58.149 11:15:09 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:58.149 11:15:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.149 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:18:58.149 11:15:09 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:58.149 00:18:58.149 real 10m20.525s 00:18:58.149 user 28m59.918s 00:18:58.149 sys 3m21.997s 00:18:58.149 ************************************ 00:18:58.149 END TEST nvmf_tcp 00:18:58.149 ************************************ 00:18:58.149 
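The epilogue above is the shared nvmftestfini teardown: the timeout test deletes its subsystem over RPC, unloads the host-side NVMe-oF modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kills the nvmf_tgt application (pid 85386 in this run), removes the SPDK network namespace and flushes the initiator interface before the suite totals are printed. Condensed into a standalone sketch with the names taken from this log ($nvmfpid and the error handling are illustrative):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 || true
  modprobe -v -r nvme-tcp nvme-fabrics               # drops nvme_tcp, nvme_fabrics, nvme_keyring as above
  kill "$nvmfpid" 2>/dev/null || true                # nvmf_tgt pid, 85386 in this run
  ip netns del nvmf_tgt_ns_spdk 2>/dev/null || true  # remove_spdk_ns
  ip -4 addr flush nvmf_init_if                      # clear the initiator-side veth address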
11:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.149 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:18:58.149 11:15:09 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:58.149 11:15:09 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:58.149 11:15:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:58.149 11:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.149 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:18:58.149 ************************************ 00:18:58.149 START TEST nvmf_dif 00:18:58.149 ************************************ 00:18:58.149 11:15:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:58.409 * Looking for test storage... 00:18:58.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:58.409 11:15:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:58.409 11:15:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:58.409 11:15:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:58.409 11:15:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:58.409 11:15:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:58.409 11:15:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:58.409 11:15:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:58.409 11:15:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:58.409 11:15:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:58.409 11:15:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.409 11:15:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:58.409 11:15:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:58.409 11:15:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:58.409 11:15:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:58.409 11:15:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:58.409 11:15:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:58.409 11:15:09 -- scripts/common.sh@344 -- # : 1 00:18:58.409 11:15:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:58.409 11:15:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.409 11:15:09 -- scripts/common.sh@364 -- # decimal 1 00:18:58.409 11:15:09 -- scripts/common.sh@352 -- # local d=1 00:18:58.409 11:15:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.409 11:15:09 -- scripts/common.sh@354 -- # echo 1 00:18:58.409 11:15:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:58.409 11:15:09 -- scripts/common.sh@365 -- # decimal 2 00:18:58.409 11:15:09 -- scripts/common.sh@352 -- # local d=2 00:18:58.409 11:15:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.409 11:15:09 -- scripts/common.sh@354 -- # echo 2 00:18:58.409 11:15:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:58.409 11:15:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:58.409 11:15:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:58.409 11:15:09 -- scripts/common.sh@367 -- # return 0 00:18:58.409 11:15:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.409 11:15:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.409 --rc genhtml_branch_coverage=1 00:18:58.409 --rc genhtml_function_coverage=1 00:18:58.409 --rc genhtml_legend=1 00:18:58.409 --rc geninfo_all_blocks=1 00:18:58.409 --rc geninfo_unexecuted_blocks=1 00:18:58.409 00:18:58.409 ' 00:18:58.409 11:15:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.409 --rc genhtml_branch_coverage=1 00:18:58.409 --rc genhtml_function_coverage=1 00:18:58.409 --rc genhtml_legend=1 00:18:58.409 --rc geninfo_all_blocks=1 00:18:58.409 --rc geninfo_unexecuted_blocks=1 00:18:58.409 00:18:58.409 ' 00:18:58.409 11:15:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.409 --rc genhtml_branch_coverage=1 00:18:58.409 --rc genhtml_function_coverage=1 00:18:58.409 --rc genhtml_legend=1 00:18:58.409 --rc geninfo_all_blocks=1 00:18:58.409 --rc geninfo_unexecuted_blocks=1 00:18:58.409 00:18:58.409 ' 00:18:58.409 11:15:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:58.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.409 --rc genhtml_branch_coverage=1 00:18:58.409 --rc genhtml_function_coverage=1 00:18:58.409 --rc genhtml_legend=1 00:18:58.409 --rc geninfo_all_blocks=1 00:18:58.409 --rc geninfo_unexecuted_blocks=1 00:18:58.409 00:18:58.409 ' 00:18:58.409 11:15:09 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.409 11:15:09 -- nvmf/common.sh@7 -- # uname -s 00:18:58.409 11:15:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.409 11:15:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.409 11:15:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.409 11:15:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.409 11:15:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.409 11:15:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.409 11:15:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.409 11:15:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.409 11:15:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.409 11:15:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.409 11:15:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:18:58.409 
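The host NQN generated just above comes from nvme-cli's gen-hostnqn, which emits a uuid-based NQN; the lines that follow reuse that uuid as the host ID for the --hostnqn/--hostid connect arguments. One way to produce the same pair in plain bash (the parameter expansion is an illustration, not necessarily how nvmf/common.sh extracts it) is:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:6bf11412-...
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the uuid for --hostid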
11:15:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:18:58.409 11:15:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.409 11:15:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.409 11:15:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.409 11:15:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.409 11:15:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.409 11:15:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.409 11:15:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.409 11:15:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.409 11:15:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.409 11:15:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.409 11:15:09 -- paths/export.sh@5 -- # export PATH 00:18:58.409 11:15:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.409 11:15:09 -- nvmf/common.sh@46 -- # : 0 00:18:58.409 11:15:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:58.409 11:15:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:58.409 11:15:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:58.409 11:15:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.409 11:15:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.409 11:15:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:58.409 11:15:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:58.409 11:15:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:58.409 11:15:09 -- target/dif.sh@15 -- # NULL_META=16 00:18:58.409 11:15:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:58.410 11:15:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:58.410 11:15:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:58.410 11:15:09 -- target/dif.sh@135 -- # nvmftestinit 00:18:58.410 11:15:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:58.410 11:15:09 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.410 11:15:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:58.410 11:15:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:58.410 11:15:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:58.410 11:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.410 11:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:58.410 11:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.410 11:15:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:58.410 11:15:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:58.410 11:15:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:58.410 11:15:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:58.410 11:15:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:58.410 11:15:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:58.410 11:15:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.410 11:15:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.410 11:15:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.410 11:15:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:58.410 11:15:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.410 11:15:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.410 11:15:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.410 11:15:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.410 11:15:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.410 11:15:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.410 11:15:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.410 11:15:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.410 11:15:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:58.410 11:15:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:58.410 Cannot find device "nvmf_tgt_br" 00:18:58.410 11:15:09 -- nvmf/common.sh@154 -- # true 00:18:58.410 11:15:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.410 Cannot find device "nvmf_tgt_br2" 00:18:58.410 11:15:09 -- nvmf/common.sh@155 -- # true 00:18:58.410 11:15:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:58.410 11:15:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:58.668 Cannot find device "nvmf_tgt_br" 00:18:58.668 11:15:09 -- nvmf/common.sh@157 -- # true 00:18:58.668 11:15:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:58.668 Cannot find device "nvmf_tgt_br2" 00:18:58.668 11:15:09 -- nvmf/common.sh@158 -- # true 00:18:58.668 11:15:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:58.668 11:15:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:58.668 11:15:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.668 11:15:09 -- nvmf/common.sh@161 -- # true 00:18:58.668 11:15:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.668 11:15:09 -- nvmf/common.sh@162 -- # true 00:18:58.668 11:15:09 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:18:58.668 11:15:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.668 11:15:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.668 11:15:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.668 11:15:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.668 11:15:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.668 11:15:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.668 11:15:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:58.668 11:15:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:58.668 11:15:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:58.669 11:15:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:58.669 11:15:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:58.669 11:15:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:58.669 11:15:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.669 11:15:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.669 11:15:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.669 11:15:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:58.669 11:15:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:58.669 11:15:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.669 11:15:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.669 11:15:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.669 11:15:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.669 11:15:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.669 11:15:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:58.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:58.669 00:18:58.669 --- 10.0.0.2 ping statistics --- 00:18:58.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.669 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:58.669 11:15:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:58.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:58.669 00:18:58.669 --- 10.0.0.3 ping statistics --- 00:18:58.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.669 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:58.669 11:15:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:58.669 00:18:58.669 --- 10.0.0.1 ping statistics --- 00:18:58.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.669 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:58.669 11:15:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.669 11:15:09 -- nvmf/common.sh@421 -- # return 0 00:18:58.669 11:15:09 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:58.669 11:15:09 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:59.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:59.242 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:59.242 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:59.242 11:15:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.242 11:15:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:59.242 11:15:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:59.242 11:15:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.242 11:15:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:59.242 11:15:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:59.242 11:15:10 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:59.242 11:15:10 -- target/dif.sh@137 -- # nvmfappstart 00:18:59.242 11:15:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:59.242 11:15:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.242 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:18:59.242 11:15:10 -- nvmf/common.sh@469 -- # nvmfpid=86329 00:18:59.242 11:15:10 -- nvmf/common.sh@470 -- # waitforlisten 86329 00:18:59.242 11:15:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:59.242 11:15:10 -- common/autotest_common.sh@829 -- # '[' -z 86329 ']' 00:18:59.242 11:15:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.242 11:15:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.242 11:15:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.242 11:15:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.242 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:18:59.242 [2024-12-06 11:15:10.255870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:59.242 [2024-12-06 11:15:10.255972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.528 [2024-12-06 11:15:10.398723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.528 [2024-12-06 11:15:10.437932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.528 [2024-12-06 11:15:10.438132] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.528 [2024-12-06 11:15:10.438153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
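The nvmf_veth_init sequence above builds the self-contained virt-mode topology these tests run against: the initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, the target ends of the veth pairs (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into nvmf_tgt_ns_spdk, and the peer ends are enslaved to the nvmf_br bridge, which the three pings then verify. Reduced to the single data path the DIF tests actually use (same interface names as the log; the second pair, the iptables rules and error handling are omitted), the bring-up looks like:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2    # initiator -> target across the bridge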
00:18:59.528 [2024-12-06 11:15:10.438164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.528 [2024-12-06 11:15:10.438196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.132 11:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.132 11:15:11 -- common/autotest_common.sh@862 -- # return 0 00:19:00.132 11:15:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.132 11:15:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.132 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 11:15:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.391 11:15:11 -- target/dif.sh@139 -- # create_transport 00:19:00.391 11:15:11 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:00.391 11:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 [2024-12-06 11:15:11.312519] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.391 11:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.391 11:15:11 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:00.391 11:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:00.391 11:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 ************************************ 00:19:00.391 START TEST fio_dif_1_default 00:19:00.391 ************************************ 00:19:00.391 11:15:11 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:19:00.391 11:15:11 -- target/dif.sh@86 -- # create_subsystems 0 00:19:00.391 11:15:11 -- target/dif.sh@28 -- # local sub 00:19:00.391 11:15:11 -- target/dif.sh@30 -- # for sub in "$@" 00:19:00.391 11:15:11 -- target/dif.sh@31 -- # create_subsystem 0 00:19:00.391 11:15:11 -- target/dif.sh@18 -- # local sub_id=0 00:19:00.391 11:15:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:00.391 11:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 bdev_null0 00:19:00.391 11:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.391 11:15:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:00.391 11:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 11:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.391 11:15:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:00.391 11:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 11:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.391 11:15:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:00.391 11:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.391 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 [2024-12-06 11:15:11.356676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.391 11:15:11 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.391 11:15:11 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:00.391 11:15:11 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:00.391 11:15:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:00.391 11:15:11 -- nvmf/common.sh@520 -- # config=() 00:19:00.391 11:15:11 -- nvmf/common.sh@520 -- # local subsystem config 00:19:00.391 11:15:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:00.391 11:15:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:00.391 11:15:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:00.391 11:15:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:00.391 { 00:19:00.391 "params": { 00:19:00.391 "name": "Nvme$subsystem", 00:19:00.391 "trtype": "$TEST_TRANSPORT", 00:19:00.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:00.391 "adrfam": "ipv4", 00:19:00.391 "trsvcid": "$NVMF_PORT", 00:19:00.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:00.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:00.391 "hdgst": ${hdgst:-false}, 00:19:00.391 "ddgst": ${ddgst:-false} 00:19:00.391 }, 00:19:00.391 "method": "bdev_nvme_attach_controller" 00:19:00.391 } 00:19:00.391 EOF 00:19:00.391 )") 00:19:00.391 11:15:11 -- target/dif.sh@82 -- # gen_fio_conf 00:19:00.391 11:15:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:00.391 11:15:11 -- target/dif.sh@54 -- # local file 00:19:00.391 11:15:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.391 11:15:11 -- target/dif.sh@56 -- # cat 00:19:00.391 11:15:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:00.391 11:15:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.391 11:15:11 -- common/autotest_common.sh@1330 -- # shift 00:19:00.391 11:15:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:00.391 11:15:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.391 11:15:11 -- nvmf/common.sh@542 -- # cat 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.391 11:15:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:00.391 11:15:11 -- target/dif.sh@72 -- # (( file <= files )) 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:00.391 11:15:11 -- nvmf/common.sh@544 -- # jq . 
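Target-side setup for the DIF runs is done entirely over RPC in the lines above: the TCP transport is created with --dif-insert-or-strip so the transport inserts protection information on writes and strips it on reads, a 64 MB null bdev is created with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1, and it is exposed through subsystem cnode0 on the 10.0.0.2:4420 listener. Issued directly against rpc.py (in the test the same calls go through rpc_cmd into the namespaced target), the sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The JSON fragment printed just below is the matching host-side configuration that fio's spdk_bdev ioengine consumes to attach bdev Nvme0 over that listener.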
00:19:00.391 11:15:11 -- nvmf/common.sh@545 -- # IFS=, 00:19:00.391 11:15:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:00.391 "params": { 00:19:00.391 "name": "Nvme0", 00:19:00.391 "trtype": "tcp", 00:19:00.391 "traddr": "10.0.0.2", 00:19:00.391 "adrfam": "ipv4", 00:19:00.391 "trsvcid": "4420", 00:19:00.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:00.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:00.391 "hdgst": false, 00:19:00.391 "ddgst": false 00:19:00.391 }, 00:19:00.391 "method": "bdev_nvme_attach_controller" 00:19:00.391 }' 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:00.391 11:15:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:00.391 11:15:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:00.391 11:15:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:00.391 11:15:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:00.391 11:15:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:00.392 11:15:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:00.650 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:00.650 fio-3.35 00:19:00.650 Starting 1 thread 00:19:00.909 [2024-12-06 11:15:11.895882] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:00.909 [2024-12-06 11:15:11.895958] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:10.885 00:19:10.885 filename0: (groupid=0, jobs=1): err= 0: pid=86401: Fri Dec 6 11:15:22 2024 00:19:10.885 read: IOPS=9593, BW=37.5MiB/s (39.3MB/s)(375MiB/10001msec) 00:19:10.885 slat (nsec): min=6072, max=67946, avg=7989.88, stdev=3506.62 00:19:10.885 clat (usec): min=318, max=2644, avg=393.14, stdev=43.79 00:19:10.885 lat (usec): min=325, max=2670, avg=401.13, stdev=44.57 00:19:10.885 clat percentiles (usec): 00:19:10.885 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 359], 00:19:10.885 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:19:10.885 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:19:10.885 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 578], 00:19:10.885 | 99.99th=[ 1156] 00:19:10.885 bw ( KiB/s): min=37184, max=40000, per=100.00%, avg=38381.47, stdev=894.18, samples=19 00:19:10.885 iops : min= 9296, max=10000, avg=9595.37, stdev=223.54, samples=19 00:19:10.885 lat (usec) : 500=98.31%, 750=1.67%, 1000=0.01% 00:19:10.885 lat (msec) : 2=0.01%, 4=0.01% 00:19:10.885 cpu : usr=84.91%, sys=13.13%, ctx=28, majf=0, minf=0 00:19:10.885 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.885 issued rwts: total=95944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.885 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:10.885 00:19:10.885 Run status group 0 (all jobs): 00:19:10.885 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=375MiB (393MB), run=10001-10001msec 00:19:11.144 11:15:22 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:11.144 11:15:22 -- target/dif.sh@43 -- # local sub 00:19:11.144 11:15:22 -- target/dif.sh@45 -- # for sub in "$@" 00:19:11.144 11:15:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:11.144 11:15:22 -- target/dif.sh@36 -- # local sub_id=0 00:19:11.144 11:15:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 00:19:11.144 real 0m10.849s 00:19:11.144 user 0m9.019s 00:19:11.144 sys 0m1.551s 00:19:11.144 11:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 ************************************ 00:19:11.144 END TEST fio_dif_1_default 00:19:11.144 ************************************ 00:19:11.144 11:15:22 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:11.144 11:15:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.144 11:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 ************************************ 00:19:11.144 START TEST 
fio_dif_1_multi_subsystems 00:19:11.144 ************************************ 00:19:11.144 11:15:22 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:19:11.144 11:15:22 -- target/dif.sh@92 -- # local files=1 00:19:11.144 11:15:22 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:11.144 11:15:22 -- target/dif.sh@28 -- # local sub 00:19:11.144 11:15:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:11.144 11:15:22 -- target/dif.sh@31 -- # create_subsystem 0 00:19:11.144 11:15:22 -- target/dif.sh@18 -- # local sub_id=0 00:19:11.144 11:15:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 bdev_null0 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 [2024-12-06 11:15:22.264368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@30 -- # for sub in "$@" 00:19:11.144 11:15:22 -- target/dif.sh@31 -- # create_subsystem 1 00:19:11.144 11:15:22 -- target/dif.sh@18 -- # local sub_id=1 00:19:11.144 11:15:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 bdev_null1 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:11.144 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.144 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.144 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.144 11:15:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:11.404 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.404 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.404 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.404 11:15:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.404 11:15:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.404 11:15:22 -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.404 11:15:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.404 11:15:22 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:11.404 11:15:22 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:11.404 11:15:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:11.404 11:15:22 -- nvmf/common.sh@520 -- # config=() 00:19:11.404 11:15:22 -- nvmf/common.sh@520 -- # local subsystem config 00:19:11.404 11:15:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.404 11:15:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:11.404 11:15:22 -- target/dif.sh@82 -- # gen_fio_conf 00:19:11.404 11:15:22 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.404 11:15:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:11.404 { 00:19:11.404 "params": { 00:19:11.404 "name": "Nvme$subsystem", 00:19:11.404 "trtype": "$TEST_TRANSPORT", 00:19:11.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.404 "adrfam": "ipv4", 00:19:11.404 "trsvcid": "$NVMF_PORT", 00:19:11.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.404 "hdgst": ${hdgst:-false}, 00:19:11.404 "ddgst": ${ddgst:-false} 00:19:11.404 }, 00:19:11.404 "method": "bdev_nvme_attach_controller" 00:19:11.404 } 00:19:11.404 EOF 00:19:11.404 )") 00:19:11.404 11:15:22 -- target/dif.sh@54 -- # local file 00:19:11.404 11:15:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:11.404 11:15:22 -- target/dif.sh@56 -- # cat 00:19:11.404 11:15:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:11.404 11:15:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:11.404 11:15:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.404 11:15:22 -- common/autotest_common.sh@1330 -- # shift 00:19:11.405 11:15:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:11.405 11:15:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:11.405 11:15:22 -- nvmf/common.sh@542 -- # cat 00:19:11.405 11:15:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.405 11:15:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:11.405 11:15:22 -- target/dif.sh@73 -- # cat 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:11.405 11:15:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:11.405 11:15:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:11.405 { 00:19:11.405 "params": { 00:19:11.405 "name": "Nvme$subsystem", 00:19:11.405 "trtype": "$TEST_TRANSPORT", 00:19:11.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.405 "adrfam": "ipv4", 00:19:11.405 "trsvcid": "$NVMF_PORT", 00:19:11.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.405 "hdgst": ${hdgst:-false}, 00:19:11.405 "ddgst": ${ddgst:-false} 00:19:11.405 }, 00:19:11.405 "method": "bdev_nvme_attach_controller" 00:19:11.405 } 00:19:11.405 EOF 00:19:11.405 )") 00:19:11.405 11:15:22 -- target/dif.sh@72 -- # (( file++ )) 00:19:11.405 11:15:22 -- 
nvmf/common.sh@542 -- # cat 00:19:11.405 11:15:22 -- target/dif.sh@72 -- # (( file <= files )) 00:19:11.405 11:15:22 -- nvmf/common.sh@544 -- # jq . 00:19:11.405 11:15:22 -- nvmf/common.sh@545 -- # IFS=, 00:19:11.405 11:15:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:11.405 "params": { 00:19:11.405 "name": "Nvme0", 00:19:11.405 "trtype": "tcp", 00:19:11.405 "traddr": "10.0.0.2", 00:19:11.405 "adrfam": "ipv4", 00:19:11.405 "trsvcid": "4420", 00:19:11.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:11.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:11.405 "hdgst": false, 00:19:11.405 "ddgst": false 00:19:11.405 }, 00:19:11.405 "method": "bdev_nvme_attach_controller" 00:19:11.405 },{ 00:19:11.405 "params": { 00:19:11.405 "name": "Nvme1", 00:19:11.405 "trtype": "tcp", 00:19:11.405 "traddr": "10.0.0.2", 00:19:11.405 "adrfam": "ipv4", 00:19:11.405 "trsvcid": "4420", 00:19:11.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.405 "hdgst": false, 00:19:11.405 "ddgst": false 00:19:11.405 }, 00:19:11.405 "method": "bdev_nvme_attach_controller" 00:19:11.405 }' 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:11.405 11:15:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:11.405 11:15:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:11.405 11:15:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:11.405 11:15:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:11.405 11:15:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:11.405 11:15:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:11.405 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:11.405 fio-3.35 00:19:11.405 Starting 2 threads 00:19:11.973 [2024-12-06 11:15:22.909380] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
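(Context note, not part of the captured test output: the two-thread run below drives one file per subsystem. The JSON printed above attaches Nvme0 and Nvme1 (cnode0 and cnode1), and gen_fio_conf emits a matching filename0/filename1 job for each bdev. The second subsystem's target-side setup mirrors the rpc_cmd calls logged earlier; as a standalone sketch, assuming scripts/rpc.py:

  rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)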
00:19:11.973 [2024-12-06 11:15:22.909466] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:21.949 00:19:21.949 filename0: (groupid=0, jobs=1): err= 0: pid=86561: Fri Dec 6 11:15:33 2024 00:19:21.949 read: IOPS=5075, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:21.949 slat (nsec): min=6261, max=75144, avg=13549.78, stdev=5139.73 00:19:21.949 clat (usec): min=401, max=3284, avg=751.96, stdev=68.73 00:19:21.949 lat (usec): min=408, max=3310, avg=765.51, stdev=69.43 00:19:21.949 clat percentiles (usec): 00:19:21.949 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:19:21.949 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 766], 00:19:21.949 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:19:21.949 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 963], 00:19:21.949 | 99.99th=[ 1057] 00:19:21.949 bw ( KiB/s): min=19744, max=20969, per=50.03%, avg=20310.85, stdev=312.11, samples=20 00:19:21.949 iops : min= 4936, max= 5242, avg=5077.75, stdev=78.05, samples=20 00:19:21.949 lat (usec) : 500=0.02%, 750=53.00%, 1000=46.96% 00:19:21.949 lat (msec) : 2=0.02%, 4=0.01% 00:19:21.949 cpu : usr=90.47%, sys=8.02%, ctx=18, majf=0, minf=0 00:19:21.949 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.949 issued rwts: total=50756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.949 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:21.949 filename1: (groupid=0, jobs=1): err= 0: pid=86562: Fri Dec 6 11:15:33 2024 00:19:21.949 read: IOPS=5073, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:21.949 slat (usec): min=6, max=158, avg=13.88, stdev= 5.33 00:19:21.949 clat (usec): min=603, max=3111, avg=749.43, stdev=66.81 00:19:21.949 lat (usec): min=610, max=3136, avg=763.31, stdev=67.62 00:19:21.949 clat percentiles (usec): 00:19:21.949 | 1.00th=[ 644], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:19:21.949 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:19:21.949 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 857], 00:19:21.949 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 971], 00:19:21.949 | 99.99th=[ 2180] 00:19:21.949 bw ( KiB/s): min=19744, max=20969, per=50.02%, avg=20304.85, stdev=318.74, samples=20 00:19:21.949 iops : min= 4936, max= 5242, avg=5076.15, stdev=79.69, samples=20 00:19:21.949 lat (usec) : 750=54.43%, 1000=45.54% 00:19:21.949 lat (msec) : 2=0.02%, 4=0.02% 00:19:21.949 cpu : usr=90.20%, sys=8.22%, ctx=27, majf=0, minf=0 00:19:21.949 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.949 issued rwts: total=50740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.949 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:21.949 00:19:21.949 Run status group 0 (all jobs): 00:19:21.949 READ: bw=39.6MiB/s (41.6MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=396MiB (416MB), run=10001-10001msec 00:19:22.209 11:15:33 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:22.209 11:15:33 -- target/dif.sh@43 -- # local sub 00:19:22.209 11:15:33 -- target/dif.sh@45 -- # for sub in "$@" 00:19:22.209 11:15:33 -- target/dif.sh@46 -- 
# destroy_subsystem 0 00:19:22.209 11:15:33 -- target/dif.sh@36 -- # local sub_id=0 00:19:22.209 11:15:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@45 -- # for sub in "$@" 00:19:22.209 11:15:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:22.209 11:15:33 -- target/dif.sh@36 -- # local sub_id=1 00:19:22.209 11:15:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 00:19:22.209 real 0m10.975s 00:19:22.209 user 0m18.710s 00:19:22.209 sys 0m1.856s 00:19:22.209 11:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 ************************************ 00:19:22.209 END TEST fio_dif_1_multi_subsystems 00:19:22.209 ************************************ 00:19:22.209 11:15:33 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:22.209 11:15:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:22.209 11:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 ************************************ 00:19:22.209 START TEST fio_dif_rand_params 00:19:22.209 ************************************ 00:19:22.209 11:15:33 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:22.209 11:15:33 -- target/dif.sh@100 -- # local NULL_DIF 00:19:22.209 11:15:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:22.209 11:15:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:22.209 11:15:33 -- target/dif.sh@103 -- # bs=128k 00:19:22.209 11:15:33 -- target/dif.sh@103 -- # numjobs=3 00:19:22.209 11:15:33 -- target/dif.sh@103 -- # iodepth=3 00:19:22.209 11:15:33 -- target/dif.sh@103 -- # runtime=5 00:19:22.209 11:15:33 -- target/dif.sh@105 -- # create_subsystems 0 00:19:22.209 11:15:33 -- target/dif.sh@28 -- # local sub 00:19:22.209 11:15:33 -- target/dif.sh@30 -- # for sub in "$@" 00:19:22.209 11:15:33 -- target/dif.sh@31 -- # create_subsystem 0 00:19:22.209 11:15:33 -- target/dif.sh@18 -- # local sub_id=0 00:19:22.209 11:15:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 bdev_null0 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 
11:15:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:22.209 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.209 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:22.209 [2024-12-06 11:15:33.294798] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.209 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.209 11:15:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:22.209 11:15:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:22.209 11:15:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:22.209 11:15:33 -- nvmf/common.sh@520 -- # config=() 00:19:22.209 11:15:33 -- nvmf/common.sh@520 -- # local subsystem config 00:19:22.209 11:15:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:22.209 11:15:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:22.209 11:15:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:22.209 11:15:33 -- target/dif.sh@82 -- # gen_fio_conf 00:19:22.209 11:15:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:22.209 { 00:19:22.209 "params": { 00:19:22.209 "name": "Nvme$subsystem", 00:19:22.209 "trtype": "$TEST_TRANSPORT", 00:19:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.209 "adrfam": "ipv4", 00:19:22.209 "trsvcid": "$NVMF_PORT", 00:19:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.209 "hdgst": ${hdgst:-false}, 00:19:22.209 "ddgst": ${ddgst:-false} 00:19:22.209 }, 00:19:22.209 "method": "bdev_nvme_attach_controller" 00:19:22.209 } 00:19:22.209 EOF 00:19:22.209 )") 00:19:22.209 11:15:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:22.209 11:15:33 -- target/dif.sh@54 -- # local file 00:19:22.209 11:15:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:22.209 11:15:33 -- target/dif.sh@56 -- # cat 00:19:22.209 11:15:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:22.209 11:15:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:22.209 11:15:33 -- common/autotest_common.sh@1330 -- # shift 00:19:22.209 11:15:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:22.209 11:15:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.209 11:15:33 -- nvmf/common.sh@542 -- # cat 00:19:22.209 11:15:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:22.209 11:15:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:22.210 11:15:33 
-- target/dif.sh@72 -- # (( file <= files )) 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:22.210 11:15:33 -- nvmf/common.sh@544 -- # jq . 00:19:22.210 11:15:33 -- nvmf/common.sh@545 -- # IFS=, 00:19:22.210 11:15:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:22.210 "params": { 00:19:22.210 "name": "Nvme0", 00:19:22.210 "trtype": "tcp", 00:19:22.210 "traddr": "10.0.0.2", 00:19:22.210 "adrfam": "ipv4", 00:19:22.210 "trsvcid": "4420", 00:19:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:22.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:22.210 "hdgst": false, 00:19:22.210 "ddgst": false 00:19:22.210 }, 00:19:22.210 "method": "bdev_nvme_attach_controller" 00:19:22.210 }' 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:22.210 11:15:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:22.210 11:15:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:22.210 11:15:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:22.469 11:15:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:22.469 11:15:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:22.469 11:15:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:22.469 11:15:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:22.469 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:22.469 ... 00:19:22.469 fio-3.35 00:19:22.469 Starting 3 threads 00:19:22.727 [2024-12-06 11:15:33.833695] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
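(Context note, not part of the captured test output: this fio_dif_rand_params pass sets NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5, so the null bdev is created with --dif-type 3 and fio runs three 128 KiB random-read jobs for roughly five seconds. The generated job file itself is piped over /dev/fd/61 and never printed; a plausible reconstruction of its shape, written as a bash here-doc and assuming the Nvme0n1 bdev name that attaching controller Nvme0 would produce, is:

  cat > rand_params.fio <<'EOF'
  [global]
  ioengine=spdk_bdev    # provided by the preloaded SPDK fio plugin
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  [filename0]
  filename=Nvme0n1      # bdev exposed after attaching controller Nvme0
  EOF

This is only a sketch matching the parameters fio reports back below (rw=randread, bs=128KiB, iodepth=3, 3 threads, run of ~5000 msec), not the exact file the harness generates.)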
00:19:22.727 [2024-12-06 11:15:33.833784] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:27.995 00:19:27.995 filename0: (groupid=0, jobs=1): err= 0: pid=86718: Fri Dec 6 11:15:38 2024 00:19:27.995 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(170MiB/5006msec) 00:19:27.995 slat (usec): min=5, max=121, avg=12.09, stdev= 7.61 00:19:27.995 clat (usec): min=10280, max=12283, avg=11020.98, stdev=463.06 00:19:27.995 lat (usec): min=10288, max=12297, avg=11033.07, stdev=463.84 00:19:27.995 clat percentiles (usec): 00:19:27.995 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10552], 00:19:27.995 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:27.995 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:19:27.995 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:19:27.995 | 99.99th=[12256] 00:19:27.995 bw ( KiB/s): min=33024, max=35328, per=33.30%, avg=34713.60, stdev=793.19, samples=10 00:19:27.995 iops : min= 258, max= 276, avg=271.20, stdev= 6.20, samples=10 00:19:27.995 lat (msec) : 20=100.00% 00:19:27.995 cpu : usr=90.61%, sys=8.31%, ctx=156, majf=0, minf=0 00:19:27.995 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.995 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.995 filename0: (groupid=0, jobs=1): err= 0: pid=86719: Fri Dec 6 11:15:38 2024 00:19:27.995 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(170MiB/5005msec) 00:19:27.995 slat (nsec): min=6639, max=55302, avg=11185.89, stdev=5154.16 00:19:27.995 clat (usec): min=10228, max=12214, avg=11020.61, stdev=471.82 00:19:27.995 lat (usec): min=10236, max=12227, avg=11031.80, stdev=471.80 00:19:27.995 clat percentiles (usec): 00:19:27.995 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10552], 00:19:27.995 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:27.995 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:19:27.995 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:19:27.995 | 99.99th=[12256] 00:19:27.995 bw ( KiB/s): min=33090, max=35328, per=33.31%, avg=34720.20, stdev=777.69, samples=10 00:19:27.995 iops : min= 258, max= 276, avg=271.20, stdev= 6.20, samples=10 00:19:27.995 lat (msec) : 20=100.00% 00:19:27.995 cpu : usr=92.03%, sys=7.35%, ctx=5, majf=0, minf=9 00:19:27.995 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.995 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.995 filename0: (groupid=0, jobs=1): err= 0: pid=86720: Fri Dec 6 11:15:38 2024 00:19:27.995 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(170MiB/5003msec) 00:19:27.995 slat (nsec): min=6598, max=51926, avg=11358.51, stdev=5275.17 00:19:27.995 clat (usec): min=8314, max=12234, avg=11015.71, stdev=483.76 00:19:27.995 lat (usec): min=8321, max=12247, avg=11027.07, stdev=484.00 00:19:27.995 clat percentiles (usec): 00:19:27.995 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 
20.00th=[10552], 00:19:27.995 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:27.995 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:19:27.995 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:19:27.995 | 99.99th=[12256] 00:19:27.995 bw ( KiB/s): min=33024, max=36096, per=33.40%, avg=34816.00, stdev=940.60, samples=9 00:19:27.995 iops : min= 258, max= 282, avg=272.00, stdev= 7.35, samples=9 00:19:27.995 lat (msec) : 10=0.22%, 20=99.78% 00:19:27.995 cpu : usr=90.92%, sys=8.44%, ctx=5, majf=0, minf=9 00:19:27.995 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.995 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.995 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.995 00:19:27.995 Run status group 0 (all jobs): 00:19:27.995 READ: bw=102MiB/s (107MB/s), 33.9MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=510MiB (534MB), run=5003-5006msec 00:19:27.995 11:15:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:27.996 11:15:39 -- target/dif.sh@43 -- # local sub 00:19:27.996 11:15:39 -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.996 11:15:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:27.996 11:15:39 -- target/dif.sh@36 -- # local sub_id=0 00:19:27.996 11:15:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:27.996 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.996 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:27.996 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.996 11:15:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:27.996 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.996 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:27.996 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # bs=4k 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # numjobs=8 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # iodepth=16 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # runtime= 00:19:27.996 11:15:39 -- target/dif.sh@109 -- # files=2 00:19:27.996 11:15:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:27.996 11:15:39 -- target/dif.sh@28 -- # local sub 00:19:27.996 11:15:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.996 11:15:39 -- target/dif.sh@31 -- # create_subsystem 0 00:19:27.996 11:15:39 -- target/dif.sh@18 -- # local sub_id=0 00:19:27.996 11:15:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:27.996 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.996 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.255 bdev_null0 00:19:28.255 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.255 11:15:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:28.255 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.255 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.255 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.255 11:15:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:28.255 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.255 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.255 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.255 11:15:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:28.255 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.255 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.255 [2024-12-06 11:15:39.160878] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.255 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.255 11:15:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:28.255 11:15:39 -- target/dif.sh@31 -- # create_subsystem 1 00:19:28.255 11:15:39 -- target/dif.sh@18 -- # local sub_id=1 00:19:28.255 11:15:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 bdev_null1 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@30 -- # for sub in "$@" 00:19:28.256 11:15:39 -- target/dif.sh@31 -- # create_subsystem 2 00:19:28.256 11:15:39 -- target/dif.sh@18 -- # local sub_id=2 00:19:28.256 11:15:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 bdev_null2 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:28.256 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.256 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:19:28.256 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.256 11:15:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:28.256 11:15:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:28.256 11:15:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:28.256 11:15:39 -- nvmf/common.sh@520 -- # config=() 00:19:28.256 11:15:39 -- nvmf/common.sh@520 -- # local subsystem config 00:19:28.256 11:15:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.256 11:15:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:28.256 11:15:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.256 11:15:39 -- target/dif.sh@82 -- # gen_fio_conf 00:19:28.256 11:15:39 -- target/dif.sh@54 -- # local file 00:19:28.256 11:15:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:28.256 { 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme$subsystem", 00:19:28.256 "trtype": "$TEST_TRANSPORT", 00:19:28.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "$NVMF_PORT", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:28.256 "hdgst": ${hdgst:-false}, 00:19:28.256 "ddgst": ${ddgst:-false} 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 } 00:19:28.256 EOF 00:19:28.256 )") 00:19:28.256 11:15:39 -- target/dif.sh@56 -- # cat 00:19:28.256 11:15:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:28.256 11:15:39 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:28.256 11:15:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.256 11:15:39 -- common/autotest_common.sh@1330 -- # shift 00:19:28.256 11:15:39 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:28.256 11:15:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # cat 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:28.256 11:15:39 -- target/dif.sh@73 -- # cat 00:19:28.256 11:15:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:28.256 { 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme$subsystem", 00:19:28.256 "trtype": "$TEST_TRANSPORT", 00:19:28.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "$NVMF_PORT", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:28.256 "hdgst": ${hdgst:-false}, 
00:19:28.256 "ddgst": ${ddgst:-false} 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 } 00:19:28.256 EOF 00:19:28.256 )") 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # cat 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file++ )) 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:28.256 11:15:39 -- target/dif.sh@73 -- # cat 00:19:28.256 11:15:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:28.256 { 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme$subsystem", 00:19:28.256 "trtype": "$TEST_TRANSPORT", 00:19:28.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "$NVMF_PORT", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:28.256 "hdgst": ${hdgst:-false}, 00:19:28.256 "ddgst": ${ddgst:-false} 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 } 00:19:28.256 EOF 00:19:28.256 )") 00:19:28.256 11:15:39 -- nvmf/common.sh@542 -- # cat 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file++ )) 00:19:28.256 11:15:39 -- target/dif.sh@72 -- # (( file <= files )) 00:19:28.256 11:15:39 -- nvmf/common.sh@544 -- # jq . 00:19:28.256 11:15:39 -- nvmf/common.sh@545 -- # IFS=, 00:19:28.256 11:15:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme0", 00:19:28.256 "trtype": "tcp", 00:19:28.256 "traddr": "10.0.0.2", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "4420", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:28.256 "hdgst": false, 00:19:28.256 "ddgst": false 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 },{ 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme1", 00:19:28.256 "trtype": "tcp", 00:19:28.256 "traddr": "10.0.0.2", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "4420", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.256 "hdgst": false, 00:19:28.256 "ddgst": false 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 },{ 00:19:28.256 "params": { 00:19:28.256 "name": "Nvme2", 00:19:28.256 "trtype": "tcp", 00:19:28.256 "traddr": "10.0.0.2", 00:19:28.256 "adrfam": "ipv4", 00:19:28.256 "trsvcid": "4420", 00:19:28.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:28.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:28.256 "hdgst": false, 00:19:28.256 "ddgst": false 00:19:28.256 }, 00:19:28.256 "method": "bdev_nvme_attach_controller" 00:19:28.256 }' 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:28.256 11:15:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:28.256 11:15:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:28.256 11:15:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:28.256 11:15:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:28.256 11:15:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:28.256 11:15:39 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:28.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:28.515 ... 00:19:28.515 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:28.515 ... 00:19:28.515 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:28.515 ... 00:19:28.515 fio-3.35 00:19:28.515 Starting 24 threads 00:19:28.773 [2024-12-06 11:15:39.915313] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:28.773 [2024-12-06 11:15:39.915383] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:41.024 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86816: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=205, BW=821KiB/s (841kB/s)(8256KiB/10058msec) 00:19:41.024 slat (usec): min=3, max=7025, avg=22.89, stdev=221.26 00:19:41.024 clat (msec): min=2, max=149, avg=77.76, stdev=25.70 00:19:41.024 lat (msec): min=2, max=150, avg=77.78, stdev=25.70 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 7], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 59], 00:19:41.024 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:19:41.024 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 111], 00:19:41.024 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 146], 00:19:41.024 | 99.99th=[ 150] 00:19:41.024 bw ( KiB/s): min= 616, max= 1536, per=4.15%, avg=819.20, stdev=211.35, samples=20 00:19:41.024 iops : min= 154, max= 384, avg=204.80, stdev=52.84, samples=20 00:19:41.024 lat (msec) : 4=0.78%, 10=1.45%, 20=2.42%, 50=7.32%, 100=62.31% 00:19:41.024 lat (msec) : 250=25.73% 00:19:41.024 cpu : usr=42.88%, sys=2.29%, ctx=1449, majf=0, minf=0 00:19:41.024 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=76.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=89.1%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86817: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=201, BW=804KiB/s (824kB/s)(8056KiB/10014msec) 00:19:41.024 slat (usec): min=8, max=5026, avg=24.59, stdev=186.24 00:19:41.024 clat (msec): min=21, max=132, avg=79.44, stdev=21.41 00:19:41.024 lat (msec): min=21, max=132, avg=79.46, stdev=21.41 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:19:41.024 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:19:41.024 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 113], 00:19:41.024 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:19:41.024 | 99.99th=[ 132] 00:19:41.024 bw ( KiB/s): min= 688, max= 1072, per=4.07%, avg=802.53, stdev=112.22, samples=19 00:19:41.024 iops : min= 172, max= 268, avg=200.63, stdev=28.06, samples=19 00:19:41.024 lat (msec) : 50=11.17%, 100=67.18%, 250=21.65% 00:19:41.024 cpu : usr=36.17%, sys=2.25%, ctx=1218, majf=0, minf=9 00:19:41.024 IO depths : 1=0.1%, 2=1.0%, 4=4.2%, 8=79.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86818: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=210, BW=842KiB/s (862kB/s)(8452KiB/10043msec) 00:19:41.024 slat (usec): min=3, max=8024, avg=17.63, stdev=174.34 00:19:41.024 clat (msec): min=9, max=139, avg=75.85, stdev=24.52 00:19:41.024 lat (msec): min=9, max=139, avg=75.86, stdev=24.51 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 57], 00:19:41.024 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:19:41.024 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 112], 00:19:41.024 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 136], 00:19:41.024 | 99.99th=[ 140] 00:19:41.024 bw ( KiB/s): min= 632, max= 1296, per=4.26%, avg=840.95, stdev=186.12, samples=20 00:19:41.024 iops : min= 158, max= 324, avg=210.20, stdev=46.47, samples=20 00:19:41.024 lat (msec) : 10=1.42%, 20=1.61%, 50=12.07%, 100=60.91%, 250=23.99% 00:19:41.024 cpu : usr=40.34%, sys=2.24%, ctx=1193, majf=0, minf=9 00:19:41.024 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86819: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=210, BW=842KiB/s (862kB/s)(8420KiB/10002msec) 00:19:41.024 slat (usec): min=4, max=8030, avg=23.35, stdev=247.02 00:19:41.024 clat (usec): min=1854, max=119913, avg=75878.57, stdev=23337.41 00:19:41.024 lat (usec): min=1863, max=119931, avg=75901.93, stdev=23330.22 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.024 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:19:41.024 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.024 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:19:41.024 | 99.99th=[ 121] 00:19:41.024 bw ( KiB/s): min= 640, max= 1088, per=4.16%, avg=820.21, stdev=128.79, samples=19 00:19:41.024 iops : min= 160, max= 272, avg=205.05, stdev=32.20, samples=19 00:19:41.024 lat (msec) : 2=0.33%, 4=0.57%, 10=1.52%, 50=12.07%, 100=66.94% 00:19:41.024 lat (msec) : 250=18.57% 00:19:41.024 cpu : usr=32.65%, sys=1.68%, ctx=911, majf=0, minf=9 00:19:41.024 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86820: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=202, BW=811KiB/s (831kB/s)(8144KiB/10037msec) 00:19:41.024 slat (usec): min=6, max=8032, avg=36.32, stdev=386.56 00:19:41.024 clat (msec): min=34, max=143, avg=78.65, stdev=21.25 00:19:41.024 lat (msec): min=34, max=143, 
avg=78.68, stdev=21.26 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.024 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:19:41.024 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 112], 00:19:41.024 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 134], 99.95th=[ 138], 00:19:41.024 | 99.99th=[ 144] 00:19:41.024 bw ( KiB/s): min= 640, max= 976, per=4.09%, avg=807.80, stdev=120.23, samples=20 00:19:41.024 iops : min= 160, max= 244, avg=201.95, stdev=30.06, samples=20 00:19:41.024 lat (msec) : 50=11.74%, 100=64.78%, 250=23.48% 00:19:41.024 cpu : usr=37.57%, sys=1.84%, ctx=1096, majf=0, minf=9 00:19:41.024 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86821: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=216, BW=866KiB/s (887kB/s)(8664KiB/10002msec) 00:19:41.024 slat (usec): min=3, max=8044, avg=24.24, stdev=251.24 00:19:41.024 clat (usec): min=1956, max=143373, avg=73766.86, stdev=24585.60 00:19:41.024 lat (usec): min=1964, max=143384, avg=73791.09, stdev=24588.48 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 51], 00:19:41.024 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:19:41.024 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.024 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:19:41.024 | 99.99th=[ 144] 00:19:41.024 bw ( KiB/s): min= 680, max= 1080, per=4.30%, avg=848.84, stdev=150.78, samples=19 00:19:41.024 iops : min= 170, max= 270, avg=212.21, stdev=37.69, samples=19 00:19:41.024 lat (msec) : 2=0.14%, 4=0.60%, 10=1.48%, 50=17.45%, 100=61.77% 00:19:41.024 lat (msec) : 250=18.56% 00:19:41.024 cpu : usr=31.62%, sys=1.47%, ctx=904, majf=0, minf=9 00:19:41.024 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:41.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.024 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.024 filename0: (groupid=0, jobs=1): err= 0: pid=86822: Fri Dec 6 11:15:50 2024 00:19:41.024 read: IOPS=205, BW=823KiB/s (843kB/s)(8268KiB/10043msec) 00:19:41.024 slat (usec): min=4, max=8024, avg=23.56, stdev=228.63 00:19:41.024 clat (msec): min=25, max=140, avg=77.55, stdev=21.43 00:19:41.024 lat (msec): min=25, max=140, avg=77.57, stdev=21.43 00:19:41.024 clat percentiles (msec): 00:19:41.024 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.024 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:19:41.024 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 111], 00:19:41.024 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 140], 00:19:41.024 | 99.99th=[ 142] 00:19:41.024 bw ( KiB/s): min= 664, max= 1048, per=4.16%, avg=820.15, stdev=132.62, samples=20 00:19:41.024 iops : min= 166, max= 262, avg=205.00, stdev=33.18, samples=20 00:19:41.025 lat (msec) : 50=11.66%, 100=65.26%, 
250=23.08% 00:19:41.025 cpu : usr=38.38%, sys=1.99%, ctx=1332, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename0: (groupid=0, jobs=1): err= 0: pid=86823: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=209, BW=837KiB/s (857kB/s)(8376KiB/10003msec) 00:19:41.025 slat (usec): min=4, max=8039, avg=23.32, stdev=247.82 00:19:41.025 clat (msec): min=3, max=134, avg=76.32, stdev=22.93 00:19:41.025 lat (msec): min=3, max=134, avg=76.34, stdev=22.92 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 6], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.025 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:19:41.025 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.025 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 134], 00:19:41.025 | 99.99th=[ 134] 00:19:41.025 bw ( KiB/s): min= 640, max= 1056, per=4.17%, avg=823.58, stdev=127.23, samples=19 00:19:41.025 iops : min= 160, max= 264, avg=205.89, stdev=31.81, samples=19 00:19:41.025 lat (msec) : 4=0.14%, 10=1.53%, 50=11.99%, 100=67.14%, 250=19.20% 00:19:41.025 cpu : usr=34.60%, sys=1.84%, ctx=1042, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86824: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=203, BW=816KiB/s (835kB/s)(8196KiB/10046msec) 00:19:41.025 slat (usec): min=7, max=8026, avg=20.37, stdev=197.92 00:19:41.025 clat (msec): min=28, max=143, avg=78.28, stdev=22.41 00:19:41.025 lat (msec): min=28, max=143, avg=78.30, stdev=22.41 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:19:41.025 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:19:41.025 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.025 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 134], 00:19:41.025 | 99.99th=[ 144] 00:19:41.025 bw ( KiB/s): min= 616, max= 1064, per=4.12%, avg=813.20, stdev=154.20, samples=20 00:19:41.025 iops : min= 154, max= 266, avg=203.30, stdev=38.55, samples=20 00:19:41.025 lat (msec) : 50=13.18%, 100=63.40%, 250=23.43% 00:19:41.025 cpu : usr=35.14%, sys=1.76%, ctx=989, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86825: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=205, BW=823KiB/s (842kB/s)(8268KiB/10052msec) 00:19:41.025 slat (usec): min=3, 
max=4030, avg=20.25, stdev=146.03 00:19:41.025 clat (msec): min=3, max=146, avg=77.66, stdev=24.57 00:19:41.025 lat (msec): min=3, max=146, avg=77.68, stdev=24.57 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 9], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.025 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:41.025 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 111], 00:19:41.025 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.025 | 99.99th=[ 148] 00:19:41.025 bw ( KiB/s): min= 608, max= 1368, per=4.16%, avg=820.05, stdev=197.71, samples=20 00:19:41.025 iops : min= 152, max= 342, avg=205.00, stdev=49.41, samples=20 00:19:41.025 lat (msec) : 4=0.10%, 10=2.13%, 20=0.87%, 50=11.03%, 100=61.49% 00:19:41.025 lat (msec) : 250=24.38% 00:19:41.025 cpu : usr=37.03%, sys=1.80%, ctx=1209, majf=0, minf=9 00:19:41.025 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=80.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86826: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=216, BW=868KiB/s (889kB/s)(8684KiB/10006msec) 00:19:41.025 slat (usec): min=3, max=12026, avg=27.93, stdev=332.17 00:19:41.025 clat (msec): min=3, max=120, avg=73.62, stdev=23.38 00:19:41.025 lat (msec): min=3, max=120, avg=73.65, stdev=23.38 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 54], 00:19:41.025 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:19:41.025 | 70.00th=[ 84], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 111], 00:19:41.025 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:19:41.025 | 99.99th=[ 121] 00:19:41.025 bw ( KiB/s): min= 699, max= 1104, per=4.35%, avg=858.26, stdev=153.15, samples=19 00:19:41.025 iops : min= 174, max= 276, avg=214.53, stdev=38.33, samples=19 00:19:41.025 lat (msec) : 4=0.14%, 10=1.20%, 50=16.31%, 100=63.66%, 250=18.70% 00:19:41.025 cpu : usr=41.22%, sys=2.37%, ctx=1403, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86827: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=207, BW=829KiB/s (849kB/s)(8308KiB/10022msec) 00:19:41.025 slat (usec): min=3, max=8030, avg=31.22, stdev=357.56 00:19:41.025 clat (msec): min=24, max=143, avg=77.05, stdev=21.85 00:19:41.025 lat (msec): min=24, max=143, avg=77.08, stdev=21.86 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.025 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:19:41.025 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 111], 00:19:41.025 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 144], 00:19:41.025 | 99.99th=[ 144] 00:19:41.025 bw ( KiB/s): min= 640, max= 1072, per=4.19%, avg=826.11, stdev=132.26, 
samples=19 00:19:41.025 iops : min= 160, max= 268, avg=206.53, stdev=33.06, samples=19 00:19:41.025 lat (msec) : 50=13.63%, 100=67.12%, 250=19.26% 00:19:41.025 cpu : usr=36.93%, sys=1.86%, ctx=1044, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86828: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=198, BW=794KiB/s (813kB/s)(7972KiB/10046msec) 00:19:41.025 slat (usec): min=3, max=8029, avg=29.34, stdev=323.30 00:19:41.025 clat (msec): min=12, max=143, avg=80.36, stdev=23.89 00:19:41.025 lat (msec): min=12, max=144, avg=80.39, stdev=23.90 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 15], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 62], 00:19:41.025 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 88], 00:19:41.025 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 108], 95.00th=[ 110], 00:19:41.025 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.025 | 99.99th=[ 144] 00:19:41.025 bw ( KiB/s): min= 592, max= 1152, per=4.02%, avg=793.05, stdev=161.48, samples=20 00:19:41.025 iops : min= 148, max= 288, avg=198.25, stdev=40.36, samples=20 00:19:41.025 lat (msec) : 20=2.41%, 50=9.23%, 100=62.87%, 250=25.49% 00:19:41.025 cpu : usr=36.40%, sys=1.80%, ctx=996, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=75.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: pid=86829: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=205, BW=820KiB/s (840kB/s)(8252KiB/10058msec) 00:19:41.025 slat (usec): min=4, max=8025, avg=33.10, stdev=364.16 00:19:41.025 clat (msec): min=14, max=141, avg=77.78, stdev=21.97 00:19:41.025 lat (msec): min=14, max=141, avg=77.81, stdev=21.98 00:19:41.025 clat percentiles (msec): 00:19:41.025 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.025 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:19:41.025 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 111], 00:19:41.025 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 140], 00:19:41.025 | 99.99th=[ 142] 00:19:41.025 bw ( KiB/s): min= 632, max= 1080, per=4.15%, avg=818.85, stdev=133.52, samples=20 00:19:41.025 iops : min= 158, max= 270, avg=204.70, stdev=33.39, samples=20 00:19:41.025 lat (msec) : 20=0.68%, 50=11.00%, 100=66.94%, 250=21.38% 00:19:41.025 cpu : usr=40.92%, sys=1.80%, ctx=1022, majf=0, minf=9 00:19:41.025 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:41.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.025 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.025 filename1: (groupid=0, jobs=1): err= 0: 
pid=86830: Fri Dec 6 11:15:50 2024 00:19:41.025 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10032msec) 00:19:41.025 slat (nsec): min=8230, max=37156, avg=15550.33, stdev=4859.81 00:19:41.025 clat (msec): min=28, max=144, avg=77.79, stdev=21.99 00:19:41.025 lat (msec): min=28, max=144, avg=77.80, stdev=21.99 00:19:41.025 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.026 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 144] 00:19:41.026 bw ( KiB/s): min= 592, max= 1048, per=4.14%, avg=817.60, stdev=148.01, samples=20 00:19:41.026 iops : min= 148, max= 262, avg=204.40, stdev=37.00, samples=20 00:19:41.026 lat (msec) : 50=12.18%, 100=67.04%, 250=20.78% 00:19:41.026 cpu : usr=36.24%, sys=1.81%, ctx=1170, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename1: (groupid=0, jobs=1): err= 0: pid=86831: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=204, BW=817KiB/s (837kB/s)(8192KiB/10027msec) 00:19:41.026 slat (usec): min=8, max=8023, avg=19.33, stdev=177.11 00:19:41.026 clat (msec): min=33, max=143, avg=78.19, stdev=22.45 00:19:41.026 lat (msec): min=33, max=143, avg=78.21, stdev=22.45 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 60], 00:19:41.026 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 142], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 144] 00:19:41.026 bw ( KiB/s): min= 616, max= 1096, per=4.13%, avg=815.60, stdev=156.63, samples=20 00:19:41.026 iops : min= 154, max= 274, avg=203.90, stdev=39.16, samples=20 00:19:41.026 lat (msec) : 50=15.62%, 100=62.74%, 250=21.63% 00:19:41.026 cpu : usr=31.37%, sys=1.65%, ctx=894, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86832: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=204, BW=818KiB/s (838kB/s)(8200KiB/10026msec) 00:19:41.026 slat (usec): min=4, max=7349, avg=20.70, stdev=184.69 00:19:41.026 clat (msec): min=24, max=144, avg=78.08, stdev=22.85 00:19:41.026 lat (msec): min=24, max=144, avg=78.10, stdev=22.85 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:19:41.026 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 144] 
00:19:41.026 bw ( KiB/s): min= 640, max= 1128, per=4.14%, avg=816.05, stdev=161.99, samples=20 00:19:41.026 iops : min= 160, max= 282, avg=204.00, stdev=40.49, samples=20 00:19:41.026 lat (msec) : 50=14.15%, 100=62.49%, 250=23.37% 00:19:41.026 cpu : usr=36.78%, sys=1.74%, ctx=1115, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86833: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=205, BW=820KiB/s (840kB/s)(8248KiB/10054msec) 00:19:41.026 slat (usec): min=4, max=8026, avg=26.49, stdev=304.35 00:19:41.026 clat (msec): min=3, max=146, avg=77.79, stdev=26.10 00:19:41.026 lat (msec): min=3, max=146, avg=77.81, stdev=26.10 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.026 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 148] 00:19:41.026 bw ( KiB/s): min= 600, max= 1648, per=4.15%, avg=818.40, stdev=236.45, samples=20 00:19:41.026 iops : min= 150, max= 412, avg=204.60, stdev=59.11, samples=20 00:19:41.026 lat (msec) : 4=0.87%, 10=2.13%, 20=1.65%, 50=9.26%, 100=60.86% 00:19:41.026 lat (msec) : 250=25.22% 00:19:41.026 cpu : usr=32.92%, sys=1.77%, ctx=911, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=1.0%, 4=3.4%, 8=78.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=88.8%, 8=10.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86834: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=206, BW=828KiB/s (848kB/s)(8292KiB/10018msec) 00:19:41.026 slat (usec): min=3, max=8034, avg=29.66, stdev=272.42 00:19:41.026 clat (msec): min=27, max=143, avg=77.19, stdev=21.05 00:19:41.026 lat (msec): min=27, max=143, avg=77.22, stdev=21.06 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.026 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:19:41.026 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 110], 00:19:41.026 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:19:41.026 | 99.99th=[ 144] 00:19:41.026 bw ( KiB/s): min= 688, max= 1128, per=4.18%, avg=825.32, stdev=128.09, samples=19 00:19:41.026 iops : min= 172, max= 282, avg=206.32, stdev=32.03, samples=19 00:19:41.026 lat (msec) : 50=13.02%, 100=66.96%, 250=20.02% 00:19:41.026 cpu : usr=38.32%, sys=2.03%, ctx=1110, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 
latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86835: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=207, BW=830KiB/s (849kB/s)(8356KiB/10073msec) 00:19:41.026 slat (usec): min=4, max=8026, avg=24.68, stdev=303.48 00:19:41.026 clat (msec): min=6, max=143, avg=76.89, stdev=24.89 00:19:41.026 lat (msec): min=6, max=143, avg=76.91, stdev=24.89 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 61], 00:19:41.026 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 144] 00:19:41.026 bw ( KiB/s): min= 608, max= 1322, per=4.20%, avg=828.95, stdev=192.59, samples=20 00:19:41.026 iops : min= 152, max= 330, avg=207.20, stdev=48.06, samples=20 00:19:41.026 lat (msec) : 10=1.82%, 20=0.38%, 50=13.93%, 100=61.66%, 250=22.21% 00:19:41.026 cpu : usr=31.60%, sys=1.63%, ctx=887, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86836: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=208, BW=835KiB/s (855kB/s)(8376KiB/10034msec) 00:19:41.026 slat (usec): min=3, max=4036, avg=23.11, stdev=175.45 00:19:41.026 clat (msec): min=20, max=141, avg=76.49, stdev=21.31 00:19:41.026 lat (msec): min=20, max=141, avg=76.51, stdev=21.31 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:19:41.026 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 77], 00:19:41.026 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 110], 00:19:41.026 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 136], 99.95th=[ 142], 00:19:41.026 | 99.99th=[ 142] 00:19:41.026 bw ( KiB/s): min= 640, max= 1024, per=4.21%, avg=831.25, stdev=126.83, samples=20 00:19:41.026 iops : min= 160, max= 256, avg=207.80, stdev=31.70, samples=20 00:19:41.026 lat (msec) : 50=11.22%, 100=67.48%, 250=21.30% 00:19:41.026 cpu : usr=47.66%, sys=2.57%, ctx=1546, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.026 filename2: (groupid=0, jobs=1): err= 0: pid=86837: Fri Dec 6 11:15:50 2024 00:19:41.026 read: IOPS=201, BW=807KiB/s (826kB/s)(8104KiB/10046msec) 00:19:41.026 slat (usec): min=3, max=8030, avg=19.37, stdev=178.32 00:19:41.026 clat (msec): min=16, max=143, avg=79.17, stdev=22.70 00:19:41.026 lat (msec): min=16, max=144, avg=79.19, stdev=22.70 00:19:41.026 clat percentiles (msec): 00:19:41.026 | 1.00th=[ 19], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:19:41.026 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:19:41.026 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 108], 95.00th=[ 
109], 00:19:41.026 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:19:41.026 | 99.99th=[ 144] 00:19:41.026 bw ( KiB/s): min= 608, max= 1048, per=4.07%, avg=804.00, stdev=149.50, samples=20 00:19:41.026 iops : min= 152, max= 262, avg=201.00, stdev=37.37, samples=20 00:19:41.026 lat (msec) : 20=1.48%, 50=10.66%, 100=63.97%, 250=23.89% 00:19:41.026 cpu : usr=37.26%, sys=1.93%, ctx=1021, majf=0, minf=9 00:19:41.026 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.026 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.027 filename2: (groupid=0, jobs=1): err= 0: pid=86838: Fri Dec 6 11:15:50 2024 00:19:41.027 read: IOPS=203, BW=814KiB/s (834kB/s)(8156KiB/10017msec) 00:19:41.027 slat (usec): min=3, max=4026, avg=26.43, stdev=210.39 00:19:41.027 clat (msec): min=23, max=133, avg=78.44, stdev=20.23 00:19:41.027 lat (msec): min=23, max=133, avg=78.47, stdev=20.23 00:19:41.027 clat percentiles (msec): 00:19:41.027 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:19:41.027 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:19:41.027 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 111], 00:19:41.027 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 134], 00:19:41.027 | 99.99th=[ 134] 00:19:41.027 bw ( KiB/s): min= 688, max= 1024, per=4.10%, avg=808.58, stdev=106.24, samples=19 00:19:41.027 iops : min= 172, max= 256, avg=202.11, stdev=26.58, samples=19 00:19:41.027 lat (msec) : 50=8.78%, 100=70.28%, 250=20.94% 00:19:41.027 cpu : usr=41.70%, sys=2.19%, ctx=1226, majf=0, minf=9 00:19:41.027 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:41.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.027 complete : 0=0.0%, 4=88.5%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.027 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.027 filename2: (groupid=0, jobs=1): err= 0: pid=86839: Fri Dec 6 11:15:50 2024 00:19:41.027 read: IOPS=204, BW=819KiB/s (839kB/s)(8224KiB/10043msec) 00:19:41.027 slat (usec): min=4, max=4024, avg=20.92, stdev=154.71 00:19:41.027 clat (msec): min=12, max=146, avg=77.95, stdev=23.44 00:19:41.027 lat (msec): min=12, max=147, avg=77.97, stdev=23.44 00:19:41.027 clat percentiles (msec): 00:19:41.027 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:19:41.027 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 80], 00:19:41.027 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 113], 00:19:41.027 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:19:41.027 | 99.99th=[ 148] 00:19:41.027 bw ( KiB/s): min= 608, max= 1154, per=4.14%, avg=817.90, stdev=145.05, samples=20 00:19:41.027 iops : min= 152, max= 288, avg=204.45, stdev=36.20, samples=20 00:19:41.027 lat (msec) : 20=2.33%, 50=8.71%, 100=64.98%, 250=23.98% 00:19:41.027 cpu : usr=43.13%, sys=2.23%, ctx=1564, majf=0, minf=9 00:19:41.027 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:41.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.027 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:41.027 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:41.027 00:19:41.027 Run status group 0 (all jobs): 00:19:41.027 READ: bw=19.3MiB/s (20.2MB/s), 794KiB/s-868KiB/s (813kB/s-889kB/s), io=194MiB (203MB), run=10002-10073msec 00:19:41.027 11:15:50 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:41.027 11:15:50 -- target/dif.sh@43 -- # local sub 00:19:41.027 11:15:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.027 11:15:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:41.027 11:15:50 -- target/dif.sh@36 -- # local sub_id=0 00:19:41.027 11:15:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.027 11:15:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:41.027 11:15:50 -- target/dif.sh@36 -- # local sub_id=1 00:19:41.027 11:15:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:41.027 11:15:50 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:41.027 11:15:50 -- target/dif.sh@36 -- # local sub_id=2 00:19:41.027 11:15:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # numjobs=2 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # iodepth=8 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # runtime=5 00:19:41.027 11:15:50 -- target/dif.sh@115 -- # files=1 00:19:41.027 11:15:50 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:41.027 11:15:50 -- target/dif.sh@28 -- # local sub 00:19:41.027 11:15:50 -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.027 11:15:50 -- target/dif.sh@31 -- # create_subsystem 0 00:19:41.027 11:15:50 -- target/dif.sh@18 -- # local sub_id=0 
00:19:41.027 11:15:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 bdev_null0 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 [2024-12-06 11:15:50.400789] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@30 -- # for sub in "$@" 00:19:41.027 11:15:50 -- target/dif.sh@31 -- # create_subsystem 1 00:19:41.027 11:15:50 -- target/dif.sh@18 -- # local sub_id=1 00:19:41.027 11:15:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 bdev_null1 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.027 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.027 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.027 11:15:50 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:41.027 11:15:50 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:41.027 11:15:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:41.027 11:15:50 -- nvmf/common.sh@520 -- # config=() 00:19:41.027 11:15:50 -- nvmf/common.sh@520 -- # local subsystem config 00:19:41.027 11:15:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:41.027 
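For reference, the subsystem setup traced above reduces to the following RPC sequence (a minimal sketch; rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, and the 10.0.0.2:4420 listener address comes from this test bed's configuration):

  # two 64 MiB null bdevs, 512-byte blocks, 16-byte metadata, DIF type 1, exported over NVMe/TCP
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each subsystem ends up exposing one null-bdev namespace over NVMe/TCP, which the fio stage below then attaches to as Nvme0 and Nvme1.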
11:15:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:41.027 { 00:19:41.027 "params": { 00:19:41.027 "name": "Nvme$subsystem", 00:19:41.027 "trtype": "$TEST_TRANSPORT", 00:19:41.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.027 "adrfam": "ipv4", 00:19:41.027 "trsvcid": "$NVMF_PORT", 00:19:41.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.027 "hdgst": ${hdgst:-false}, 00:19:41.027 "ddgst": ${ddgst:-false} 00:19:41.027 }, 00:19:41.027 "method": "bdev_nvme_attach_controller" 00:19:41.027 } 00:19:41.027 EOF 00:19:41.027 )") 00:19:41.027 11:15:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.027 11:15:50 -- target/dif.sh@82 -- # gen_fio_conf 00:19:41.027 11:15:50 -- target/dif.sh@54 -- # local file 00:19:41.027 11:15:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.027 11:15:50 -- target/dif.sh@56 -- # cat 00:19:41.027 11:15:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:41.027 11:15:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:41.027 11:15:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:41.027 11:15:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.027 11:15:50 -- common/autotest_common.sh@1330 -- # shift 00:19:41.027 11:15:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:41.027 11:15:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.028 11:15:50 -- nvmf/common.sh@542 -- # cat 00:19:41.028 11:15:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:41.028 11:15:50 -- target/dif.sh@72 -- # (( file <= files )) 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.028 11:15:50 -- target/dif.sh@73 -- # cat 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:41.028 11:15:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:41.028 11:15:50 -- target/dif.sh@72 -- # (( file++ )) 00:19:41.028 11:15:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:41.028 { 00:19:41.028 "params": { 00:19:41.028 "name": "Nvme$subsystem", 00:19:41.028 "trtype": "$TEST_TRANSPORT", 00:19:41.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.028 "adrfam": "ipv4", 00:19:41.028 "trsvcid": "$NVMF_PORT", 00:19:41.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.028 "hdgst": ${hdgst:-false}, 00:19:41.028 "ddgst": ${ddgst:-false} 00:19:41.028 }, 00:19:41.028 "method": "bdev_nvme_attach_controller" 00:19:41.028 } 00:19:41.028 EOF 00:19:41.028 )") 00:19:41.028 11:15:50 -- target/dif.sh@72 -- # (( file <= files )) 00:19:41.028 11:15:50 -- nvmf/common.sh@542 -- # cat 00:19:41.028 11:15:50 -- nvmf/common.sh@544 -- # jq . 
00:19:41.028 11:15:50 -- nvmf/common.sh@545 -- # IFS=, 00:19:41.028 11:15:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:41.028 "params": { 00:19:41.028 "name": "Nvme0", 00:19:41.028 "trtype": "tcp", 00:19:41.028 "traddr": "10.0.0.2", 00:19:41.028 "adrfam": "ipv4", 00:19:41.028 "trsvcid": "4420", 00:19:41.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:41.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:41.028 "hdgst": false, 00:19:41.028 "ddgst": false 00:19:41.028 }, 00:19:41.028 "method": "bdev_nvme_attach_controller" 00:19:41.028 },{ 00:19:41.028 "params": { 00:19:41.028 "name": "Nvme1", 00:19:41.028 "trtype": "tcp", 00:19:41.028 "traddr": "10.0.0.2", 00:19:41.028 "adrfam": "ipv4", 00:19:41.028 "trsvcid": "4420", 00:19:41.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.028 "hdgst": false, 00:19:41.028 "ddgst": false 00:19:41.028 }, 00:19:41.028 "method": "bdev_nvme_attach_controller" 00:19:41.028 }' 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:41.028 11:15:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:41.028 11:15:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:41.028 11:15:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:41.028 11:15:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:41.028 11:15:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:41.028 11:15:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:41.028 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:41.028 ... 00:19:41.028 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:41.028 ... 00:19:41.028 fio-3.35 00:19:41.028 Starting 4 threads 00:19:41.028 [2024-12-06 11:15:51.008815] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
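The job file handed to fio on /dev/fd/61 is produced by gen_fio_conf from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). A rough sketch of the resulting configuration is shown below; the exact option list is generated by the script, and the Nvme0n1/Nvme1n1 filenames assume SPDK's usual <controller>n<nsid> bdev naming for the two controllers attached via the JSON above:

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=8k,16k,128k
  numjobs=2
  iodepth=8
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

With numjobs=2 and two job sections this accounts for the four fio threads started above, each issuing random reads against one of the null-bdev namespaces through the spdk_bdev ioengine.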
00:19:41.028 [2024-12-06 11:15:51.008887] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:45.218 00:19:45.218 filename0: (groupid=0, jobs=1): err= 0: pid=86983: Fri Dec 6 11:15:56 2024 00:19:45.218 read: IOPS=2320, BW=18.1MiB/s (19.0MB/s)(90.6MiB/5001msec) 00:19:45.218 slat (nsec): min=6954, max=56709, avg=14602.08, stdev=4980.68 00:19:45.218 clat (usec): min=980, max=5475, avg=3412.20, stdev=1041.47 00:19:45.218 lat (usec): min=988, max=5501, avg=3426.80, stdev=1040.38 00:19:45.218 clat percentiles (usec): 00:19:45.218 | 1.00th=[ 1827], 5.00th=[ 1909], 10.00th=[ 2008], 20.00th=[ 2409], 00:19:45.218 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2999], 60.00th=[ 4178], 00:19:45.218 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4817], 00:19:45.218 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 5211], 00:19:45.218 | 99.99th=[ 5276] 00:19:45.218 bw ( KiB/s): min=17442, max=19488, per=26.65%, avg=18547.78, stdev=556.28, samples=9 00:19:45.218 iops : min= 2180, max= 2436, avg=2318.44, stdev=69.60, samples=9 00:19:45.218 lat (usec) : 1000=0.03% 00:19:45.218 lat (msec) : 2=9.98%, 4=45.72%, 10=44.27% 00:19:45.218 cpu : usr=91.86%, sys=7.06%, ctx=49, majf=0, minf=9 00:19:45.218 IO depths : 1=0.1%, 2=0.8%, 4=63.2%, 8=36.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.218 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.218 issued rwts: total=11603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.218 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.219 filename0: (groupid=0, jobs=1): err= 0: pid=86984: Fri Dec 6 11:15:56 2024 00:19:45.219 read: IOPS=2316, BW=18.1MiB/s (19.0MB/s)(90.5MiB/5003msec) 00:19:45.219 slat (nsec): min=7103, max=62426, avg=14750.88, stdev=4732.48 00:19:45.219 clat (usec): min=1404, max=9009, avg=3418.79, stdev=1045.30 00:19:45.219 lat (usec): min=1417, max=9042, avg=3433.54, stdev=1045.07 00:19:45.219 clat percentiles (usec): 00:19:45.219 | 1.00th=[ 1827], 5.00th=[ 1909], 10.00th=[ 2008], 20.00th=[ 2409], 00:19:45.219 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 3032], 60.00th=[ 4178], 00:19:45.219 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4817], 00:19:45.219 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 8717], 00:19:45.219 | 99.99th=[ 8848] 00:19:45.219 bw ( KiB/s): min=17296, max=19440, per=26.63%, avg=18531.20, stdev=543.88, samples=10 00:19:45.219 iops : min= 2162, max= 2430, avg=2316.40, stdev=67.99, samples=10 00:19:45.219 lat (msec) : 2=9.86%, 4=45.91%, 10=44.23% 00:19:45.219 cpu : usr=91.80%, sys=7.20%, ctx=76, majf=0, minf=0 00:19:45.219 IO depths : 1=0.1%, 2=0.8%, 4=63.2%, 8=35.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 issued rwts: total=11589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.219 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=86985: Fri Dec 6 11:15:56 2024 00:19:45.219 read: IOPS=2346, BW=18.3MiB/s (19.2MB/s)(91.7MiB/5003msec) 00:19:45.219 slat (nsec): min=6754, max=53041, avg=11541.33, stdev=4880.82 00:19:45.219 clat (usec): min=715, max=6831, avg=3381.78, stdev=1060.53 00:19:45.219 lat (usec): min=724, max=6857, avg=3393.32, stdev=1060.45 00:19:45.219 clat percentiles (usec): 
00:19:45.219 | 1.00th=[ 1663], 5.00th=[ 1876], 10.00th=[ 1975], 20.00th=[ 2376], 00:19:45.219 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2999], 60.00th=[ 4113], 00:19:45.219 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4817], 00:19:45.219 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 6521], 00:19:45.219 | 99.99th=[ 6587] 00:19:45.219 bw ( KiB/s): min=18304, max=19735, per=26.98%, avg=18775.10, stdev=470.32, samples=10 00:19:45.219 iops : min= 2288, max= 2466, avg=2346.80, stdev=58.59, samples=10 00:19:45.219 lat (usec) : 750=0.02%, 1000=0.01% 00:19:45.219 lat (msec) : 2=11.48%, 4=46.20%, 10=42.29% 00:19:45.219 cpu : usr=90.80%, sys=8.20%, ctx=21, majf=0, minf=0 00:19:45.219 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 issued rwts: total=11739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.219 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.219 filename1: (groupid=0, jobs=1): err= 0: pid=86986: Fri Dec 6 11:15:56 2024 00:19:45.219 read: IOPS=1716, BW=13.4MiB/s (14.1MB/s)(67.1MiB/5001msec) 00:19:45.219 slat (usec): min=6, max=207, avg=10.62, stdev= 5.87 00:19:45.219 clat (usec): min=1425, max=6826, avg=4615.61, stdev=363.38 00:19:45.219 lat (usec): min=1434, max=6834, avg=4626.23, stdev=363.22 00:19:45.219 clat percentiles (usec): 00:19:45.219 | 1.00th=[ 3097], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4424], 00:19:45.219 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:19:45.219 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:45.219 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5997], 99.95th=[ 6063], 00:19:45.219 | 99.99th=[ 6849] 00:19:45.219 bw ( KiB/s): min=13312, max=15104, per=19.76%, avg=13752.89, stdev=558.35, samples=9 00:19:45.219 iops : min= 1664, max= 1888, avg=1719.11, stdev=69.79, samples=9 00:19:45.219 lat (msec) : 2=0.44%, 4=1.98%, 10=97.58% 00:19:45.219 cpu : usr=91.00%, sys=7.84%, ctx=104, majf=0, minf=9 00:19:45.219 IO depths : 1=0.1%, 2=24.5%, 4=50.3%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.219 issued rwts: total=8584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.219 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.219 00:19:45.219 Run status group 0 (all jobs): 00:19:45.219 READ: bw=68.0MiB/s (71.3MB/s), 13.4MiB/s-18.3MiB/s (14.1MB/s-19.2MB/s), io=340MiB (356MB), run=5001-5003msec 00:19:45.219 11:15:56 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:45.219 11:15:56 -- target/dif.sh@43 -- # local sub 00:19:45.219 11:15:56 -- target/dif.sh@45 -- # for sub in "$@" 00:19:45.219 11:15:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:45.219 11:15:56 -- target/dif.sh@36 -- # local sub_id=0 00:19:45.219 11:15:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:45.219 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.219 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.219 11:15:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:45.219 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:45.219 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.219 11:15:56 -- target/dif.sh@45 -- # for sub in "$@" 00:19:45.219 11:15:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:45.219 11:15:56 -- target/dif.sh@36 -- # local sub_id=1 00:19:45.219 11:15:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.219 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.219 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.219 11:15:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:45.219 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.219 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 ************************************ 00:19:45.219 END TEST fio_dif_rand_params 00:19:45.219 ************************************ 00:19:45.219 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.219 00:19:45.219 real 0m23.069s 00:19:45.219 user 2m4.281s 00:19:45.219 sys 0m8.094s 00:19:45.219 11:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:45.219 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 11:15:56 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:45.478 11:15:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:45.478 11:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:45.478 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 ************************************ 00:19:45.478 START TEST fio_dif_digest 00:19:45.478 ************************************ 00:19:45.478 11:15:56 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:45.478 11:15:56 -- target/dif.sh@123 -- # local NULL_DIF 00:19:45.478 11:15:56 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:45.478 11:15:56 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:45.478 11:15:56 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:45.478 11:15:56 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:45.478 11:15:56 -- target/dif.sh@127 -- # numjobs=3 00:19:45.478 11:15:56 -- target/dif.sh@127 -- # iodepth=3 00:19:45.478 11:15:56 -- target/dif.sh@127 -- # runtime=10 00:19:45.478 11:15:56 -- target/dif.sh@128 -- # hdgst=true 00:19:45.478 11:15:56 -- target/dif.sh@128 -- # ddgst=true 00:19:45.478 11:15:56 -- target/dif.sh@130 -- # create_subsystems 0 00:19:45.478 11:15:56 -- target/dif.sh@28 -- # local sub 00:19:45.478 11:15:56 -- target/dif.sh@30 -- # for sub in "$@" 00:19:45.478 11:15:56 -- target/dif.sh@31 -- # create_subsystem 0 00:19:45.478 11:15:56 -- target/dif.sh@18 -- # local sub_id=0 00:19:45.478 11:15:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:45.478 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.478 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 bdev_null0 00:19:45.478 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.478 11:15:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:45.478 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.479 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.479 11:15:56 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:45.479 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.479 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.479 11:15:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:45.479 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.479 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 [2024-12-06 11:15:56.417182] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.479 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.479 11:15:56 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:45.479 11:15:56 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:45.479 11:15:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:45.479 11:15:56 -- nvmf/common.sh@520 -- # config=() 00:19:45.479 11:15:56 -- nvmf/common.sh@520 -- # local subsystem config 00:19:45.479 11:15:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.479 11:15:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:45.479 11:15:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:45.479 { 00:19:45.479 "params": { 00:19:45.479 "name": "Nvme$subsystem", 00:19:45.479 "trtype": "$TEST_TRANSPORT", 00:19:45.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.479 "adrfam": "ipv4", 00:19:45.479 "trsvcid": "$NVMF_PORT", 00:19:45.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.479 "hdgst": ${hdgst:-false}, 00:19:45.479 "ddgst": ${ddgst:-false} 00:19:45.479 }, 00:19:45.479 "method": "bdev_nvme_attach_controller" 00:19:45.479 } 00:19:45.479 EOF 00:19:45.479 )") 00:19:45.479 11:15:56 -- target/dif.sh@82 -- # gen_fio_conf 00:19:45.479 11:15:56 -- target/dif.sh@54 -- # local file 00:19:45.479 11:15:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.479 11:15:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:45.479 11:15:56 -- target/dif.sh@56 -- # cat 00:19:45.479 11:15:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:45.479 11:15:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:45.479 11:15:56 -- nvmf/common.sh@542 -- # cat 00:19:45.479 11:15:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.479 11:15:56 -- common/autotest_common.sh@1330 -- # shift 00:19:45.479 11:15:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:45.479 11:15:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.479 11:15:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:45.479 11:15:56 -- target/dif.sh@72 -- # (( file <= files )) 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.479 11:15:56 -- nvmf/common.sh@544 -- # jq . 
00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:45.479 11:15:56 -- nvmf/common.sh@545 -- # IFS=, 00:19:45.479 11:15:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:45.479 "params": { 00:19:45.479 "name": "Nvme0", 00:19:45.479 "trtype": "tcp", 00:19:45.479 "traddr": "10.0.0.2", 00:19:45.479 "adrfam": "ipv4", 00:19:45.479 "trsvcid": "4420", 00:19:45.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:45.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:45.479 "hdgst": true, 00:19:45.479 "ddgst": true 00:19:45.479 }, 00:19:45.479 "method": "bdev_nvme_attach_controller" 00:19:45.479 }' 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:45.479 11:15:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:45.479 11:15:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:45.479 11:15:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:45.479 11:15:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:45.479 11:15:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:45.479 11:15:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.738 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:45.738 ... 00:19:45.738 fio-3.35 00:19:45.738 Starting 3 threads 00:19:45.996 [2024-12-06 11:15:56.948847] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
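This digest pass differs from the earlier runs mainly in the attach parameters: hdgst and ddgst are set to true in the JSON above, enabling the NVMe/TCP PDU header and data digests so every 128 KiB read is CRC-checked on the wire. A sketch of the corresponding job file, under the same bdev-naming assumption as before and using the parameters set at target/dif.sh@127 (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10):

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k,128k,128k
  numjobs=3
  iodepth=3
  runtime=10
  time_based=1

  [filename0]
  filename=Nvme0n1

numjobs=3 with the single job section accounts for the three fio threads started above against the DIF-type-3 null bdev.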
00:19:45.996 [2024-12-06 11:15:56.948910] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:55.993 00:19:55.993 filename0: (groupid=0, jobs=1): err= 0: pid=87092: Fri Dec 6 11:16:07 2024 00:19:55.993 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(296MiB/10006msec) 00:19:55.993 slat (usec): min=7, max=171, avg=15.18, stdev= 7.49 00:19:55.993 clat (usec): min=11551, max=14742, avg=12630.58, stdev=559.85 00:19:55.993 lat (usec): min=11564, max=14771, avg=12645.76, stdev=560.57 00:19:55.993 clat percentiles (usec): 00:19:55.993 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:19:55.994 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:19:55.994 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:19:55.994 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14746], 99.95th=[14746], 00:19:55.994 | 99.99th=[14746] 00:19:55.994 bw ( KiB/s): min=29184, max=31488, per=33.34%, avg=30315.79, stdev=593.15, samples=19 00:19:55.994 iops : min= 228, max= 246, avg=236.84, stdev= 4.63, samples=19 00:19:55.994 lat (msec) : 20=100.00% 00:19:55.994 cpu : usr=90.89%, sys=8.16%, ctx=154, majf=0, minf=0 00:19:55.994 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:55.994 filename0: (groupid=0, jobs=1): err= 0: pid=87093: Fri Dec 6 11:16:07 2024 00:19:55.994 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(296MiB/10005msec) 00:19:55.994 slat (nsec): min=6945, max=50124, avg=10043.59, stdev=4168.42 00:19:55.994 clat (usec): min=9849, max=14479, avg=12639.33, stdev=574.74 00:19:55.994 lat (usec): min=9857, max=14492, avg=12649.37, stdev=575.23 00:19:55.994 clat percentiles (usec): 00:19:55.994 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:19:55.994 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:19:55.994 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 00:19:55.994 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14484], 99.95th=[14484], 00:19:55.994 | 99.99th=[14484] 00:19:55.994 bw ( KiB/s): min=29184, max=31488, per=33.34%, avg=30318.95, stdev=644.30, samples=19 00:19:55.994 iops : min= 228, max= 246, avg=236.84, stdev= 5.05, samples=19 00:19:55.994 lat (msec) : 10=0.13%, 20=99.87% 00:19:55.994 cpu : usr=91.02%, sys=8.33%, ctx=31, majf=0, minf=0 00:19:55.994 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:55.994 filename0: (groupid=0, jobs=1): err= 0: pid=87094: Fri Dec 6 11:16:07 2024 00:19:55.994 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(296MiB/10008msec) 00:19:55.994 slat (nsec): min=5883, max=60419, avg=14831.59, stdev=5419.79 00:19:55.994 clat (usec): min=11556, max=14771, avg=12633.70, stdev=563.13 00:19:55.994 lat (usec): min=11583, max=14797, avg=12648.53, stdev=563.89 00:19:55.994 clat percentiles (usec): 00:19:55.994 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 
20.00th=[12125], 00:19:55.994 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:19:55.994 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:19:55.994 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14746], 99.95th=[14746], 00:19:55.994 | 99.99th=[14746] 00:19:55.994 bw ( KiB/s): min=29184, max=31488, per=33.34%, avg=30315.79, stdev=646.03, samples=19 00:19:55.994 iops : min= 228, max= 246, avg=236.84, stdev= 5.05, samples=19 00:19:55.994 lat (msec) : 20=100.00% 00:19:55.994 cpu : usr=90.91%, sys=8.37%, ctx=20, majf=0, minf=0 00:19:55.994 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.994 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:55.994 00:19:55.994 Run status group 0 (all jobs): 00:19:55.994 READ: bw=88.8MiB/s (93.1MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=889MiB (932MB), run=10005-10008msec 00:19:56.253 11:16:07 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:56.253 11:16:07 -- target/dif.sh@43 -- # local sub 00:19:56.253 11:16:07 -- target/dif.sh@45 -- # for sub in "$@" 00:19:56.253 11:16:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:56.253 11:16:07 -- target/dif.sh@36 -- # local sub_id=0 00:19:56.253 11:16:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:56.253 11:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.253 11:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:56.253 11:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.253 11:16:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:56.253 11:16:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.253 11:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:56.253 11:16:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.253 00:19:56.253 real 0m10.856s 00:19:56.253 user 0m27.834s 00:19:56.253 sys 0m2.700s 00:19:56.253 11:16:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.253 ************************************ 00:19:56.253 END TEST fio_dif_digest 00:19:56.253 ************************************ 00:19:56.253 11:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:56.253 11:16:07 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:56.253 11:16:07 -- target/dif.sh@147 -- # nvmftestfini 00:19:56.253 11:16:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.253 11:16:07 -- nvmf/common.sh@116 -- # sync 00:19:56.253 11:16:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.253 11:16:07 -- nvmf/common.sh@119 -- # set +e 00:19:56.253 11:16:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.253 11:16:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.253 rmmod nvme_tcp 00:19:56.253 rmmod nvme_fabrics 00:19:56.253 rmmod nvme_keyring 00:19:56.253 11:16:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.253 11:16:07 -- nvmf/common.sh@123 -- # set -e 00:19:56.253 11:16:07 -- nvmf/common.sh@124 -- # return 0 00:19:56.253 11:16:07 -- nvmf/common.sh@477 -- # '[' -n 86329 ']' 00:19:56.253 11:16:07 -- nvmf/common.sh@478 -- # killprocess 86329 00:19:56.253 11:16:07 -- common/autotest_common.sh@936 -- # '[' -z 86329 ']' 00:19:56.253 11:16:07 -- common/autotest_common.sh@940 -- # kill -0 86329 
00:19:56.253 11:16:07 -- common/autotest_common.sh@941 -- # uname 00:19:56.253 11:16:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.253 11:16:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86329 00:19:56.512 killing process with pid 86329 00:19:56.512 11:16:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.512 11:16:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.512 11:16:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86329' 00:19:56.512 11:16:07 -- common/autotest_common.sh@955 -- # kill 86329 00:19:56.512 11:16:07 -- common/autotest_common.sh@960 -- # wait 86329 00:19:56.512 11:16:07 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:56.512 11:16:07 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:56.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.770 Waiting for block devices as requested 00:19:57.029 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.029 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.029 11:16:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:57.029 11:16:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:57.029 11:16:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.029 11:16:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:57.029 11:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.029 11:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:57.029 11:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.029 11:16:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:57.029 ************************************ 00:19:57.029 END TEST nvmf_dif 00:19:57.029 ************************************ 00:19:57.029 00:19:57.029 real 0m58.850s 00:19:57.029 user 3m47.015s 00:19:57.029 sys 0m18.989s 00:19:57.029 11:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:57.029 11:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:57.029 11:16:08 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:57.029 11:16:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:57.029 11:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.029 11:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:57.288 ************************************ 00:19:57.288 START TEST nvmf_abort_qd_sizes 00:19:57.288 ************************************ 00:19:57.288 11:16:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:57.288 * Looking for test storage... 
00:19:57.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:57.288 11:16:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:57.288 11:16:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:57.288 11:16:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:57.288 11:16:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:57.288 11:16:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:57.288 11:16:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:57.288 11:16:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:57.288 11:16:08 -- scripts/common.sh@335 -- # IFS=.-: 00:19:57.288 11:16:08 -- scripts/common.sh@335 -- # read -ra ver1 00:19:57.288 11:16:08 -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.288 11:16:08 -- scripts/common.sh@336 -- # read -ra ver2 00:19:57.288 11:16:08 -- scripts/common.sh@337 -- # local 'op=<' 00:19:57.288 11:16:08 -- scripts/common.sh@339 -- # ver1_l=2 00:19:57.288 11:16:08 -- scripts/common.sh@340 -- # ver2_l=1 00:19:57.288 11:16:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:57.288 11:16:08 -- scripts/common.sh@343 -- # case "$op" in 00:19:57.288 11:16:08 -- scripts/common.sh@344 -- # : 1 00:19:57.288 11:16:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:57.288 11:16:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:57.288 11:16:08 -- scripts/common.sh@364 -- # decimal 1 00:19:57.288 11:16:08 -- scripts/common.sh@352 -- # local d=1 00:19:57.288 11:16:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.288 11:16:08 -- scripts/common.sh@354 -- # echo 1 00:19:57.288 11:16:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:57.288 11:16:08 -- scripts/common.sh@365 -- # decimal 2 00:19:57.288 11:16:08 -- scripts/common.sh@352 -- # local d=2 00:19:57.288 11:16:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.288 11:16:08 -- scripts/common.sh@354 -- # echo 2 00:19:57.288 11:16:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:57.288 11:16:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:57.288 11:16:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:57.288 11:16:08 -- scripts/common.sh@367 -- # return 0 00:19:57.288 11:16:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.288 11:16:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.288 --rc genhtml_branch_coverage=1 00:19:57.288 --rc genhtml_function_coverage=1 00:19:57.288 --rc genhtml_legend=1 00:19:57.288 --rc geninfo_all_blocks=1 00:19:57.288 --rc geninfo_unexecuted_blocks=1 00:19:57.288 00:19:57.288 ' 00:19:57.288 11:16:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.288 --rc genhtml_branch_coverage=1 00:19:57.288 --rc genhtml_function_coverage=1 00:19:57.288 --rc genhtml_legend=1 00:19:57.288 --rc geninfo_all_blocks=1 00:19:57.288 --rc geninfo_unexecuted_blocks=1 00:19:57.288 00:19:57.288 ' 00:19:57.288 11:16:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.288 --rc genhtml_branch_coverage=1 00:19:57.288 --rc genhtml_function_coverage=1 00:19:57.288 --rc genhtml_legend=1 00:19:57.288 --rc geninfo_all_blocks=1 00:19:57.288 --rc geninfo_unexecuted_blocks=1 00:19:57.288 00:19:57.288 ' 00:19:57.288 
11:16:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:57.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.289 --rc genhtml_branch_coverage=1 00:19:57.289 --rc genhtml_function_coverage=1 00:19:57.289 --rc genhtml_legend=1 00:19:57.289 --rc geninfo_all_blocks=1 00:19:57.289 --rc geninfo_unexecuted_blocks=1 00:19:57.289 00:19:57.289 ' 00:19:57.289 11:16:08 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.289 11:16:08 -- nvmf/common.sh@7 -- # uname -s 00:19:57.289 11:16:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.289 11:16:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.289 11:16:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.289 11:16:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.289 11:16:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.289 11:16:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.289 11:16:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.289 11:16:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.289 11:16:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.289 11:16:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee 00:19:57.289 11:16:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=6bf11412-71a7-484f-85c4-221cb93c26ee 00:19:57.289 11:16:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.289 11:16:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.289 11:16:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.289 11:16:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.289 11:16:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.289 11:16:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.289 11:16:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.289 11:16:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.289 11:16:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.289 11:16:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.289 11:16:08 -- paths/export.sh@5 -- # export PATH 00:19:57.289 11:16:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.289 11:16:08 -- nvmf/common.sh@46 -- # : 0 00:19:57.289 11:16:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:57.289 11:16:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:57.289 11:16:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:57.289 11:16:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.289 11:16:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.289 11:16:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:57.289 11:16:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:57.289 11:16:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:57.289 11:16:08 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:57.289 11:16:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:57.289 11:16:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.289 11:16:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:57.289 11:16:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:57.289 11:16:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:57.289 11:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.289 11:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:57.289 11:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.289 11:16:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:57.289 11:16:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:57.289 11:16:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.289 11:16:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.289 11:16:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.289 11:16:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:57.289 11:16:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.289 11:16:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.289 11:16:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.289 11:16:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.289 11:16:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.289 11:16:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.289 11:16:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.289 11:16:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.289 11:16:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:57.289 11:16:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:57.547 Cannot find device "nvmf_tgt_br" 00:19:57.547 11:16:08 -- nvmf/common.sh@154 -- # true 00:19:57.547 11:16:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.547 Cannot find device "nvmf_tgt_br2" 00:19:57.547 11:16:08 -- nvmf/common.sh@155 -- # true 
00:19:57.547 11:16:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:57.547 11:16:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:57.547 Cannot find device "nvmf_tgt_br" 00:19:57.547 11:16:08 -- nvmf/common.sh@157 -- # true 00:19:57.547 11:16:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:57.547 Cannot find device "nvmf_tgt_br2" 00:19:57.547 11:16:08 -- nvmf/common.sh@158 -- # true 00:19:57.547 11:16:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:57.547 11:16:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:57.547 11:16:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.547 11:16:08 -- nvmf/common.sh@161 -- # true 00:19:57.547 11:16:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.547 11:16:08 -- nvmf/common.sh@162 -- # true 00:19:57.547 11:16:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.547 11:16:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.548 11:16:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.548 11:16:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.548 11:16:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.548 11:16:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.548 11:16:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.548 11:16:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.548 11:16:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.548 11:16:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:57.548 11:16:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:57.548 11:16:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:57.548 11:16:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:57.548 11:16:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.548 11:16:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.548 11:16:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.548 11:16:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:57.548 11:16:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:57.548 11:16:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.548 11:16:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.806 11:16:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.807 11:16:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.807 11:16:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.807 11:16:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:57.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:57.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:57.807 00:19:57.807 --- 10.0.0.2 ping statistics --- 00:19:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.807 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:57.807 11:16:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:57.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:57.807 00:19:57.807 --- 10.0.0.3 ping statistics --- 00:19:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.807 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:57.807 11:16:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:57.807 00:19:57.807 --- 10.0.0.1 ping statistics --- 00:19:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.807 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:57.807 11:16:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.807 11:16:08 -- nvmf/common.sh@421 -- # return 0 00:19:57.807 11:16:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:57.807 11:16:08 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:58.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.373 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.631 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.631 11:16:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.631 11:16:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:58.631 11:16:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:58.631 11:16:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.631 11:16:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:58.631 11:16:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:58.631 11:16:09 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:58.631 11:16:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:58.631 11:16:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.631 11:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:58.631 11:16:09 -- nvmf/common.sh@469 -- # nvmfpid=87695 00:19:58.631 11:16:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:58.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.631 11:16:09 -- nvmf/common.sh@470 -- # waitforlisten 87695 00:19:58.631 11:16:09 -- common/autotest_common.sh@829 -- # '[' -z 87695 ']' 00:19:58.631 11:16:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.631 11:16:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.631 11:16:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.631 11:16:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.631 11:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:58.631 [2024-12-06 11:16:09.663491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
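The nvmf_veth_init trace above builds a small bridged veth topology (initiator on the host, target inside the nvmf_tgt_ns_spdk namespace) and verifies it with the pings whose results are logged before the target application starts. Below is a condensed sketch of the equivalent manual setup; interface names, addresses and flags are taken from the trace itself, the second target interface (10.0.0.3 / nvmf_tgt_if2) is omitted for brevity, and error handling is left out.

# Condensed sketch of the veth/netns topology the trace above creates (run as root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge joins both peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # host -> in-namespace target, as checked above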
00:19:58.632 [2024-12-06 11:16:09.663852] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.890 [2024-12-06 11:16:09.808784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.890 [2024-12-06 11:16:09.850160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:58.890 [2024-12-06 11:16:09.850612] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.890 [2024-12-06 11:16:09.850773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.890 [2024-12-06 11:16:09.851000] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.890 [2024-12-06 11:16:09.851367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.890 [2024-12-06 11:16:09.851415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.890 [2024-12-06 11:16:09.851492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.890 [2024-12-06 11:16:09.851500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.827 11:16:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.827 11:16:10 -- common/autotest_common.sh@862 -- # return 0 00:19:59.827 11:16:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:59.827 11:16:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 11:16:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:59.827 11:16:10 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:59.827 11:16:10 -- scripts/common.sh@312 -- # local nvmes 00:19:59.827 11:16:10 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:59.827 11:16:10 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:59.827 11:16:10 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:59.827 11:16:10 -- scripts/common.sh@297 -- # local bdf= 00:19:59.827 11:16:10 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:59.827 11:16:10 -- scripts/common.sh@232 -- # local class 00:19:59.827 11:16:10 -- scripts/common.sh@233 -- # local subclass 00:19:59.827 11:16:10 -- scripts/common.sh@234 -- # local progif 00:19:59.827 11:16:10 -- scripts/common.sh@235 -- # printf %02x 1 00:19:59.827 11:16:10 -- scripts/common.sh@235 -- # class=01 00:19:59.827 11:16:10 -- scripts/common.sh@236 -- # printf %02x 8 00:19:59.827 11:16:10 -- scripts/common.sh@236 -- # subclass=08 00:19:59.827 11:16:10 -- scripts/common.sh@237 -- # printf %02x 2 00:19:59.827 11:16:10 -- scripts/common.sh@237 -- # progif=02 00:19:59.827 11:16:10 -- scripts/common.sh@239 -- # hash lspci 00:19:59.827 11:16:10 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:59.827 11:16:10 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:59.827 11:16:10 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:59.827 11:16:10 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:59.827 11:16:10 -- scripts/common.sh@244 -- # tr -d '"' 00:19:59.827 11:16:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:59.827 11:16:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:59.827 11:16:10 -- scripts/common.sh@15 -- # local i 00:19:59.827 11:16:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:59.827 11:16:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:59.827 11:16:10 -- scripts/common.sh@24 -- # return 0 00:19:59.827 11:16:10 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:59.827 11:16:10 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:59.827 11:16:10 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:59.827 11:16:10 -- scripts/common.sh@15 -- # local i 00:19:59.827 11:16:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:59.827 11:16:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:59.827 11:16:10 -- scripts/common.sh@24 -- # return 0 00:19:59.827 11:16:10 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:59.827 11:16:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:59.827 11:16:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:59.827 11:16:10 -- scripts/common.sh@322 -- # uname -s 00:19:59.827 11:16:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:59.827 11:16:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:59.827 11:16:10 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:59.827 11:16:10 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:59.827 11:16:10 -- scripts/common.sh@322 -- # uname -s 00:19:59.827 11:16:10 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:59.827 11:16:10 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:59.827 11:16:10 -- scripts/common.sh@327 -- # (( 2 )) 00:19:59.827 11:16:10 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:59.827 11:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:59.827 11:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 ************************************ 00:19:59.827 START TEST spdk_target_abort 00:19:59.827 ************************************ 00:19:59.827 11:16:10 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:59.827 11:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 spdk_targetn1 00:19:59.827 11:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.827 11:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 [2024-12-06 
11:16:10.869516] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.827 11:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:59.827 11:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 11:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:59.827 11:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 11:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:59.827 11:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.827 11:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 [2024-12-06 11:16:10.901740] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.827 11:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.827 11:16:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:59.828 11:16:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:03.114 Initializing NVMe Controllers 00:20:03.114 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:03.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:03.114 Initialization complete. Launching workers. 00:20:03.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10362, failed: 0 00:20:03.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1034, failed to submit 9328 00:20:03.114 success 819, unsuccess 215, failed 0 00:20:03.114 11:16:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:03.114 11:16:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:06.411 Initializing NVMe Controllers 00:20:06.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:06.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:06.411 Initialization complete. Launching workers. 00:20:06.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8938, failed: 0 00:20:06.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1178, failed to submit 7760 00:20:06.411 success 376, unsuccess 802, failed 0 00:20:06.411 11:16:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:06.411 11:16:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:09.715 Initializing NVMe Controllers 00:20:09.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:09.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:09.715 Initialization complete. Launching workers. 
00:20:09.715 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31342, failed: 0 00:20:09.715 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2257, failed to submit 29085 00:20:09.715 success 468, unsuccess 1789, failed 0 00:20:09.715 11:16:20 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:09.715 11:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.715 11:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:09.715 11:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.715 11:16:20 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:09.716 11:16:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.716 11:16:20 -- common/autotest_common.sh@10 -- # set +x 00:20:09.974 11:16:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.974 11:16:20 -- target/abort_qd_sizes.sh@62 -- # killprocess 87695 00:20:09.974 11:16:20 -- common/autotest_common.sh@936 -- # '[' -z 87695 ']' 00:20:09.974 11:16:20 -- common/autotest_common.sh@940 -- # kill -0 87695 00:20:09.974 11:16:20 -- common/autotest_common.sh@941 -- # uname 00:20:09.975 11:16:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.975 11:16:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87695 00:20:09.975 11:16:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:09.975 11:16:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:09.975 killing process with pid 87695 00:20:09.975 11:16:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87695' 00:20:09.975 11:16:21 -- common/autotest_common.sh@955 -- # kill 87695 00:20:09.975 11:16:21 -- common/autotest_common.sh@960 -- # wait 87695 00:20:10.234 00:20:10.234 real 0m10.378s 00:20:10.234 user 0m42.686s 00:20:10.234 sys 0m2.088s 00:20:10.234 11:16:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:10.234 11:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:10.234 ************************************ 00:20:10.234 END TEST spdk_target_abort 00:20:10.234 ************************************ 00:20:10.234 11:16:21 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:10.234 11:16:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:10.234 11:16:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.234 11:16:21 -- common/autotest_common.sh@10 -- # set +x 00:20:10.234 ************************************ 00:20:10.234 START TEST kernel_target_abort 00:20:10.234 ************************************ 00:20:10.234 11:16:21 -- common/autotest_common.sh@1114 -- # kernel_target 00:20:10.234 11:16:21 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:10.234 11:16:21 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:10.234 11:16:21 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:10.234 11:16:21 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:10.234 11:16:21 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:10.234 11:16:21 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:10.234 11:16:21 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:10.234 11:16:21 -- nvmf/common.sh@627 -- # local block nvme 00:20:10.234 11:16:21 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:10.234 11:16:21 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:10.234 11:16:21 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:10.234 11:16:21 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:10.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.493 Waiting for block devices as requested 00:20:10.752 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.752 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.752 11:16:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:10.752 11:16:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:10.752 11:16:21 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:10.752 11:16:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:10.752 11:16:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:10.752 No valid GPT data, bailing 00:20:10.752 11:16:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:11.011 11:16:21 -- scripts/common.sh@393 -- # pt= 00:20:11.011 11:16:21 -- scripts/common.sh@394 -- # return 1 00:20:11.011 11:16:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:11.011 11:16:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:11.011 11:16:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:11.011 11:16:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:11.011 11:16:21 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:11.011 11:16:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:11.011 No valid GPT data, bailing 00:20:11.011 11:16:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:11.011 11:16:21 -- scripts/common.sh@393 -- # pt= 00:20:11.011 11:16:21 -- scripts/common.sh@394 -- # return 1 00:20:11.011 11:16:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:11.011 11:16:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:11.011 11:16:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:11.011 11:16:21 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:11.011 11:16:21 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:11.011 11:16:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:11.011 No valid GPT data, bailing 00:20:11.011 11:16:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:11.011 11:16:22 -- scripts/common.sh@393 -- # pt= 00:20:11.011 11:16:22 -- scripts/common.sh@394 -- # return 1 00:20:11.011 11:16:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:11.011 11:16:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:11.011 11:16:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:11.011 11:16:22 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:11.011 11:16:22 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:11.011 11:16:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:11.011 No valid GPT data, bailing 00:20:11.011 11:16:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:11.011 11:16:22 -- scripts/common.sh@393 -- # pt= 00:20:11.011 11:16:22 -- scripts/common.sh@394 -- # return 1 00:20:11.011 11:16:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:11.011 11:16:22 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:11.011 11:16:22 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:11.011 11:16:22 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:11.011 11:16:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:11.011 11:16:22 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:11.011 11:16:22 -- nvmf/common.sh@654 -- # echo 1 00:20:11.011 11:16:22 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:11.011 11:16:22 -- nvmf/common.sh@656 -- # echo 1 00:20:11.011 11:16:22 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:11.011 11:16:22 -- nvmf/common.sh@663 -- # echo tcp 00:20:11.011 11:16:22 -- nvmf/common.sh@664 -- # echo 4420 00:20:11.011 11:16:22 -- nvmf/common.sh@665 -- # echo ipv4 00:20:11.011 11:16:22 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:11.011 11:16:22 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6bf11412-71a7-484f-85c4-221cb93c26ee --hostid=6bf11412-71a7-484f-85c4-221cb93c26ee -a 10.0.0.1 -t tcp -s 4420 00:20:11.011 00:20:11.011 Discovery Log Number of Records 2, Generation counter 2 00:20:11.011 =====Discovery Log Entry 0====== 00:20:11.011 trtype: tcp 00:20:11.011 adrfam: ipv4 00:20:11.011 subtype: current discovery subsystem 00:20:11.011 treq: not specified, sq flow control disable supported 00:20:11.011 portid: 1 00:20:11.011 trsvcid: 4420 00:20:11.011 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:11.011 traddr: 10.0.0.1 00:20:11.011 eflags: none 00:20:11.011 sectype: none 00:20:11.011 =====Discovery Log Entry 1====== 00:20:11.011 trtype: tcp 00:20:11.011 adrfam: ipv4 00:20:11.011 subtype: nvme subsystem 00:20:11.011 treq: not specified, sq flow control disable supported 00:20:11.011 portid: 1 00:20:11.011 trsvcid: 4420 00:20:11.011 subnqn: kernel_target 00:20:11.012 traddr: 10.0.0.1 00:20:11.012 eflags: none 00:20:11.012 sectype: none 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
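The configure_kernel_target trace above sets up a Linux kernel nvmet subsystem (kernel_target) backed by /dev/nvme1n3 and exposes it over TCP on 10.0.0.1:4420, which is what the discovery log just printed. A sketch of that configfs sequence follows; the echoed values come from the log, but the configfs attribute filenames are the standard nvmet ones and are an assumption here, since xtrace does not show the redirect targets of the echo commands.

# Sketch of the kernel nvmet target configuration traced above (run as root).
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir -p subsystems/kernel_target/namespaces/1 ports/1
echo 1 > subsystems/kernel_target/attr_allow_any_host          # assumed attribute file
echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
echo 1 > subsystems/kernel_target/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/
nvme discover -t tcp -a 10.0.0.1 -s 4420                       # should list kernel_target, as in the log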
00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:11.012 11:16:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:14.296 Initializing NVMe Controllers 00:20:14.296 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:14.296 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:14.296 Initialization complete. Launching workers. 00:20:14.296 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29791, failed: 0 00:20:14.296 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29791, failed to submit 0 00:20:14.296 success 0, unsuccess 29791, failed 0 00:20:14.296 11:16:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:14.296 11:16:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:17.599 Initializing NVMe Controllers 00:20:17.599 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:17.599 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:17.599 Initialization complete. Launching workers. 00:20:17.599 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64775, failed: 0 00:20:17.599 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26754, failed to submit 38021 00:20:17.599 success 0, unsuccess 26754, failed 0 00:20:17.599 11:16:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:17.599 11:16:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:20.896 Initializing NVMe Controllers 00:20:20.896 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:20.896 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:20.896 Initialization complete. Launching workers. 
00:20:20.896 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77878, failed: 0 00:20:20.896 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19464, failed to submit 58414 00:20:20.896 success 0, unsuccess 19464, failed 0 00:20:20.896 11:16:31 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:20.896 11:16:31 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:20.896 11:16:31 -- nvmf/common.sh@677 -- # echo 0 00:20:20.896 11:16:31 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:20.896 11:16:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:20.896 11:16:31 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:20.896 11:16:31 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:20.896 11:16:31 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:20.896 11:16:31 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:20.896 00:20:20.896 real 0m10.488s 00:20:20.896 user 0m5.609s 00:20:20.896 sys 0m2.291s 00:20:20.896 11:16:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.896 11:16:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.896 ************************************ 00:20:20.896 END TEST kernel_target_abort 00:20:20.896 ************************************ 00:20:20.896 11:16:31 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:20.896 11:16:31 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:20.896 11:16:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:20.896 11:16:31 -- nvmf/common.sh@116 -- # sync 00:20:20.896 11:16:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:20.896 11:16:31 -- nvmf/common.sh@119 -- # set +e 00:20:20.896 11:16:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:20.896 11:16:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:20.896 rmmod nvme_tcp 00:20:20.896 rmmod nvme_fabrics 00:20:20.896 rmmod nvme_keyring 00:20:20.896 11:16:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.896 11:16:31 -- nvmf/common.sh@123 -- # set -e 00:20:20.896 11:16:31 -- nvmf/common.sh@124 -- # return 0 00:20:20.896 11:16:31 -- nvmf/common.sh@477 -- # '[' -n 87695 ']' 00:20:20.896 11:16:31 -- nvmf/common.sh@478 -- # killprocess 87695 00:20:20.896 11:16:31 -- common/autotest_common.sh@936 -- # '[' -z 87695 ']' 00:20:20.896 11:16:31 -- common/autotest_common.sh@940 -- # kill -0 87695 00:20:20.896 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87695) - No such process 00:20:20.896 11:16:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87695 is not found' 00:20:20.896 Process with pid 87695 is not found 00:20:20.896 11:16:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:20.896 11:16:31 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:21.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:21.462 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:21.462 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:21.720 11:16:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.720 11:16:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.720 11:16:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.720 11:16:32 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:21.720 11:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.720 11:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:21.720 11:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.720 11:16:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:21.720 00:20:21.720 real 0m24.480s 00:20:21.720 user 0m49.805s 00:20:21.720 sys 0m5.696s 00:20:21.720 ************************************ 00:20:21.720 END TEST nvmf_abort_qd_sizes 00:20:21.720 ************************************ 00:20:21.720 11:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:21.720 11:16:32 -- common/autotest_common.sh@10 -- # set +x 00:20:21.720 11:16:32 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:21.720 11:16:32 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:21.720 11:16:32 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:21.720 11:16:32 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:21.720 11:16:32 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:21.720 11:16:32 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:21.720 11:16:32 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:21.720 11:16:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.720 11:16:32 -- common/autotest_common.sh@10 -- # set +x 00:20:21.720 11:16:32 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:21.720 11:16:32 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:21.720 11:16:32 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:21.720 11:16:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.629 INFO: APP EXITING 00:20:23.629 INFO: killing all VMs 00:20:23.629 INFO: killing vhost app 00:20:23.629 INFO: EXIT DONE 00:20:23.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.156 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:24.156 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:24.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.728 Cleaning 00:20:24.728 Removing: /var/run/dpdk/spdk0/config 00:20:24.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:24.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:24.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:24.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:24.728 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:24.728 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:24.728 Removing: /var/run/dpdk/spdk1/config 00:20:24.728 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:24.728 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:24.728 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:24.728 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:24.728 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:24.728 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:24.728 Removing: /var/run/dpdk/spdk2/config 00:20:24.728 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:24.728 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:24.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:24.985 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:24.985 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:24.985 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:24.985 Removing: /var/run/dpdk/spdk3/config 00:20:24.985 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:24.986 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:24.986 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:24.986 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:24.986 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:24.986 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:24.986 Removing: /var/run/dpdk/spdk4/config 00:20:24.986 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:24.986 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:24.986 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:24.986 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:24.986 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:24.986 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:24.986 Removing: /dev/shm/nvmf_trace.0 00:20:24.986 Removing: /dev/shm/spdk_tgt_trace.pid65862 00:20:24.986 Removing: /var/run/dpdk/spdk0 00:20:24.986 Removing: /var/run/dpdk/spdk1 00:20:24.986 Removing: /var/run/dpdk/spdk2 00:20:24.986 Removing: /var/run/dpdk/spdk3 00:20:24.986 Removing: /var/run/dpdk/spdk4 00:20:24.986 Removing: /var/run/dpdk/spdk_pid65710 00:20:24.986 Removing: /var/run/dpdk/spdk_pid65862 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66115 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66300 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66453 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66519 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66602 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66700 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66783 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66817 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66847 00:20:24.986 Removing: /var/run/dpdk/spdk_pid66916 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67002 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67429 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67481 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67526 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67542 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67604 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67614 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67676 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67692 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67743 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67761 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67801 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67819 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67943 00:20:24.986 Removing: /var/run/dpdk/spdk_pid67975 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68062 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68109 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68128 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68192 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68206 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68235 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68260 
00:20:24.986 Removing: /var/run/dpdk/spdk_pid68289 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68303 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68338 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68359 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68388 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68410 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68444 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68458 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68493 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68507 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68541 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68555 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68590 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68604 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68638 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68658 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68687 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68706 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68741 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68755 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68784 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68809 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68838 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68852 00:20:24.986 Removing: /var/run/dpdk/spdk_pid68892 00:20:25.245 Removing: /var/run/dpdk/spdk_pid68906 00:20:25.245 Removing: /var/run/dpdk/spdk_pid68935 00:20:25.245 Removing: /var/run/dpdk/spdk_pid68949 00:20:25.245 Removing: /var/run/dpdk/spdk_pid68989 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69006 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69044 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69066 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69098 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69118 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69151 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69166 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69202 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69273 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69360 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69692 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69710 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69741 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69748 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69767 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69785 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69792 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69810 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69823 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69836 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69849 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69866 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69880 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69888 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69906 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69924 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69932 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69950 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69959 00:20:25.245 Removing: /var/run/dpdk/spdk_pid69976 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70010 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70018 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70046 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70110 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70142 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70146 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70175 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70184 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70186 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70232 00:20:25.245 Removing: 
/var/run/dpdk/spdk_pid70238 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70265 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70272 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70279 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70287 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70289 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70297 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70304 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70312 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70338 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70359 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70369 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70397 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70407 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70414 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70455 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70462 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70487 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70495 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70502 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70510 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70516 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70519 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70527 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70534 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70610 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70652 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70758 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70795 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70839 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70848 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70868 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70877 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70912 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70921 00:20:25.245 Removing: /var/run/dpdk/spdk_pid70997 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71011 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71060 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71139 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71189 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71218 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71311 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71352 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71383 00:20:25.246 Removing: /var/run/dpdk/spdk_pid71607 00:20:25.504 Removing: /var/run/dpdk/spdk_pid71699 00:20:25.504 Removing: /var/run/dpdk/spdk_pid71721 00:20:25.504 Removing: /var/run/dpdk/spdk_pid72066 00:20:25.504 Removing: /var/run/dpdk/spdk_pid72104 00:20:25.504 Removing: /var/run/dpdk/spdk_pid72421 00:20:25.505 Removing: /var/run/dpdk/spdk_pid72839 00:20:25.505 Removing: /var/run/dpdk/spdk_pid73102 00:20:25.505 Removing: /var/run/dpdk/spdk_pid73857 00:20:25.505 Removing: /var/run/dpdk/spdk_pid74683 00:20:25.505 Removing: /var/run/dpdk/spdk_pid74800 00:20:25.505 Removing: /var/run/dpdk/spdk_pid74862 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76146 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76362 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76668 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76782 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76920 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76940 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76962 00:20:25.505 Removing: /var/run/dpdk/spdk_pid76982 00:20:25.505 Removing: /var/run/dpdk/spdk_pid77066 00:20:25.505 Removing: /var/run/dpdk/spdk_pid77195 00:20:25.505 Removing: /var/run/dpdk/spdk_pid77332 00:20:25.505 Removing: /var/run/dpdk/spdk_pid77407 00:20:25.505 Removing: /var/run/dpdk/spdk_pid77802 00:20:25.505 Removing: /var/run/dpdk/spdk_pid78150 
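The Removing entries in this stretch of the log (they continue just below, through the last spdk_pid files and the Clean step) record the autotest post-cleanup pass deleting per-instance DPDK runtime directories under /var/run/dpdk, stale spdk_pid* lock files, and shared-memory trace files under /dev/shm. The snippet below is a minimal, hypothetical sketch of clearing the same leftover state by hand; the paths mirror the entries removed here, but the loop, the glob patterns, and the use of sudo are illustrative assumptions, not the autotest cleanup code itself.

# Hypothetical manual cleanup of leftover SPDK/DPDK runtime state (illustrative sketch only).
# Paths mirror the "Removing:" entries in the surrounding log; sudo and the globs are assumptions.
for d in /var/run/dpdk/spdk[0-9]*; do
    # each spdkN directory holds config, fbarray_memseg-*, fbarray_memzone and hugepage_info
    [ -d "$d" ] && sudo rm -rf "$d"
done
sudo rm -f /var/run/dpdk/spdk_pid*                              # stale per-process lock files
sudo rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*   # shared-memory trace buffers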
00:20:25.505 Removing: /var/run/dpdk/spdk_pid78157 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80363 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80371 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80648 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80668 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80682 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80712 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80724 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80808 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80810 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80918 00:20:25.505 Removing: /var/run/dpdk/spdk_pid80926 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81034 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81041 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81447 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81491 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81600 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81679 00:20:25.505 Removing: /var/run/dpdk/spdk_pid81984 00:20:25.505 Removing: /var/run/dpdk/spdk_pid82180 00:20:25.505 Removing: /var/run/dpdk/spdk_pid82552 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83078 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83518 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83583 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83630 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83683 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83782 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83829 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83876 00:20:25.505 Removing: /var/run/dpdk/spdk_pid83929 00:20:25.505 Removing: /var/run/dpdk/spdk_pid84255 00:20:25.505 Removing: /var/run/dpdk/spdk_pid85435 00:20:25.505 Removing: /var/run/dpdk/spdk_pid85576 00:20:25.505 Removing: /var/run/dpdk/spdk_pid85824 00:20:25.505 Removing: /var/run/dpdk/spdk_pid86392 00:20:25.505 Removing: /var/run/dpdk/spdk_pid86550 00:20:25.505 Removing: /var/run/dpdk/spdk_pid86708 00:20:25.505 Removing: /var/run/dpdk/spdk_pid86805 00:20:25.505 Removing: /var/run/dpdk/spdk_pid86979 00:20:25.505 Removing: /var/run/dpdk/spdk_pid87088 00:20:25.505 Removing: /var/run/dpdk/spdk_pid87746 00:20:25.505 Removing: /var/run/dpdk/spdk_pid87781 00:20:25.505 Removing: /var/run/dpdk/spdk_pid87816 00:20:25.505 Removing: /var/run/dpdk/spdk_pid88066 00:20:25.505 Removing: /var/run/dpdk/spdk_pid88098 00:20:25.505 Removing: /var/run/dpdk/spdk_pid88133 00:20:25.505 Clean 00:20:25.764 killing process with pid 60096 00:20:25.764 killing process with pid 60097 00:20:25.764 11:16:36 -- common/autotest_common.sh@1446 -- # return 0 00:20:25.764 11:16:36 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:20:25.764 11:16:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.764 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 11:16:36 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:20:25.764 11:16:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.764 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 11:16:36 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:25.764 11:16:36 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:25.764 11:16:36 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:25.764 11:16:36 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:20:25.764 11:16:36 -- spdk/autotest.sh@383 -- # hostname 00:20:25.764 11:16:36 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:26.023 geninfo: WARNING: invalid characters removed from testname! 00:20:47.970 11:16:58 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:51.258 11:17:01 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:53.784 11:17:04 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:56.314 11:17:06 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:58.235 11:17:09 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:00.773 11:17:11 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:03.311 11:17:13 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:03.311 11:17:14 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:21:03.311 11:17:14 -- common/autotest_common.sh@1690 -- $ lcov --version 00:21:03.311 11:17:14 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:21:03.311 11:17:14 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:21:03.311 11:17:14 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:21:03.311 11:17:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:21:03.311 11:17:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:21:03.311 11:17:14 -- scripts/common.sh@335 -- $ IFS=.-: 00:21:03.311 11:17:14 -- scripts/common.sh@335 -- $ read -ra ver1 00:21:03.311 11:17:14 -- scripts/common.sh@336 -- $ IFS=.-: 
00:21:03.311 11:17:14 -- scripts/common.sh@336 -- $ read -ra ver2 00:21:03.311 11:17:14 -- scripts/common.sh@337 -- $ local 'op=<' 00:21:03.311 11:17:14 -- scripts/common.sh@339 -- $ ver1_l=2 00:21:03.311 11:17:14 -- scripts/common.sh@340 -- $ ver2_l=1 00:21:03.311 11:17:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:21:03.311 11:17:14 -- scripts/common.sh@343 -- $ case "$op" in 00:21:03.311 11:17:14 -- scripts/common.sh@344 -- $ : 1 00:21:03.311 11:17:14 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:21:03.311 11:17:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:03.311 11:17:14 -- scripts/common.sh@364 -- $ decimal 1 00:21:03.311 11:17:14 -- scripts/common.sh@352 -- $ local d=1 00:21:03.311 11:17:14 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:03.311 11:17:14 -- scripts/common.sh@354 -- $ echo 1 00:21:03.311 11:17:14 -- scripts/common.sh@364 -- $ ver1[v]=1 00:21:03.311 11:17:14 -- scripts/common.sh@365 -- $ decimal 2 00:21:03.311 11:17:14 -- scripts/common.sh@352 -- $ local d=2 00:21:03.311 11:17:14 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:03.311 11:17:14 -- scripts/common.sh@354 -- $ echo 2 00:21:03.311 11:17:14 -- scripts/common.sh@365 -- $ ver2[v]=2 00:21:03.311 11:17:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:21:03.311 11:17:14 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:21:03.311 11:17:14 -- scripts/common.sh@367 -- $ return 0 00:21:03.311 11:17:14 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.311 11:17:14 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:21:03.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.311 --rc genhtml_branch_coverage=1 00:21:03.311 --rc genhtml_function_coverage=1 00:21:03.311 --rc genhtml_legend=1 00:21:03.311 --rc geninfo_all_blocks=1 00:21:03.311 --rc geninfo_unexecuted_blocks=1 00:21:03.311 00:21:03.311 ' 00:21:03.311 11:17:14 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:21:03.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.311 --rc genhtml_branch_coverage=1 00:21:03.311 --rc genhtml_function_coverage=1 00:21:03.311 --rc genhtml_legend=1 00:21:03.312 --rc geninfo_all_blocks=1 00:21:03.312 --rc geninfo_unexecuted_blocks=1 00:21:03.312 00:21:03.312 ' 00:21:03.312 11:17:14 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:21:03.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.312 --rc genhtml_branch_coverage=1 00:21:03.312 --rc genhtml_function_coverage=1 00:21:03.312 --rc genhtml_legend=1 00:21:03.312 --rc geninfo_all_blocks=1 00:21:03.312 --rc geninfo_unexecuted_blocks=1 00:21:03.312 00:21:03.312 ' 00:21:03.312 11:17:14 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:21:03.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.312 --rc genhtml_branch_coverage=1 00:21:03.312 --rc genhtml_function_coverage=1 00:21:03.312 --rc genhtml_legend=1 00:21:03.312 --rc geninfo_all_blocks=1 00:21:03.312 --rc geninfo_unexecuted_blocks=1 00:21:03.312 00:21:03.312 ' 00:21:03.312 11:17:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:03.312 11:17:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:03.312 11:17:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.312 11:17:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.312 11:17:14 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.312 11:17:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.312 11:17:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.312 11:17:14 -- paths/export.sh@5 -- $ export PATH 00:21:03.312 11:17:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.312 11:17:14 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:03.312 11:17:14 -- common/autobuild_common.sh@440 -- $ date +%s 00:21:03.312 11:17:14 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733483834.XXXXXX 00:21:03.312 11:17:14 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733483834.z1gg5i 00:21:03.312 11:17:14 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:21:03.312 11:17:14 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:21:03.312 11:17:14 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:21:03.312 11:17:14 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:21:03.312 11:17:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:03.312 11:17:14 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:03.312 11:17:14 -- common/autobuild_common.sh@456 -- $ get_config_params 00:21:03.312 11:17:14 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:21:03.312 11:17:14 -- common/autotest_common.sh@10 -- $ set +x 00:21:03.312 11:17:14 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:21:03.312 11:17:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:03.312 11:17:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
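The xtrace output above covers two steps worth unpacking: scripts/common.sh probing the installed lcov version (the lt 1.15 2 / cmp_versions trace splits each version string on IFS=.-: and compares it field by field before exporting LCOV_OPTS with the branch- and function-coverage flags), and autotest.sh aggregating coverage by merging cov_base.info with cov_test.info into cov_total.info and then pruning dpdk, system, example and tool paths from it. The sketch below reproduces that aggregation standalone; the lcov flags and filter patterns are the ones traced in this log (the --ignore-errors workaround on the /usr/* pass is omitted), while the OUT variable and the assumption that cov_base.info already exists are mine.

# Illustrative standalone version of the coverage aggregation traced above (sketch, assumptions noted).
OUT=/home/vagrant/spdk_repo/output                                  # assumed output directory
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
# merge the baseline capture and the post-test capture into one tracefile
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# drop coverage for code that is not SPDK's own (same patterns as in the log)
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*'           -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '/usr/*'             -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*'   -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*'   -o "$OUT/cov_total.info"
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"                     # keep only the filtered total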
00:21:03.312 11:17:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:03.312 11:17:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:03.312 11:17:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:03.312 11:17:14 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:03.312 11:17:14 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:03.312 11:17:14 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:03.312 11:17:14 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:03.312 11:17:14 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:03.312 + [[ -n 5970 ]] 00:21:03.312 + sudo kill 5970 00:21:03.321 [Pipeline] } 00:21:03.338 [Pipeline] // timeout 00:21:03.344 [Pipeline] } 00:21:03.360 [Pipeline] // stage 00:21:03.365 [Pipeline] } 00:21:03.380 [Pipeline] // catchError 00:21:03.390 [Pipeline] stage 00:21:03.393 [Pipeline] { (Stop VM) 00:21:03.407 [Pipeline] sh 00:21:03.687 + vagrant halt 00:21:07.878 ==> default: Halting domain... 00:21:13.164 [Pipeline] sh 00:21:13.442 + vagrant destroy -f 00:21:16.725 ==> default: Removing domain... 00:21:16.737 [Pipeline] sh 00:21:17.015 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:17.024 [Pipeline] } 00:21:17.040 [Pipeline] // stage 00:21:17.046 [Pipeline] } 00:21:17.060 [Pipeline] // dir 00:21:17.065 [Pipeline] } 00:21:17.081 [Pipeline] // wrap 00:21:17.090 [Pipeline] } 00:21:17.102 [Pipeline] // catchError 00:21:17.110 [Pipeline] stage 00:21:17.112 [Pipeline] { (Epilogue) 00:21:17.123 [Pipeline] sh 00:21:17.404 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:22.689 [Pipeline] catchError 00:21:22.691 [Pipeline] { 00:21:22.704 [Pipeline] sh 00:21:22.986 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:23.245 Artifacts sizes are good 00:21:23.254 [Pipeline] } 00:21:23.269 [Pipeline] // catchError 00:21:23.282 [Pipeline] archiveArtifacts 00:21:23.290 Archiving artifacts 00:21:23.453 [Pipeline] cleanWs 00:21:23.464 [WS-CLEANUP] Deleting project workspace... 00:21:23.464 [WS-CLEANUP] Deferred wipeout is used... 00:21:23.470 [WS-CLEANUP] done 00:21:23.472 [Pipeline] } 00:21:23.486 [Pipeline] // stage 00:21:23.492 [Pipeline] } 00:21:23.505 [Pipeline] // node 00:21:23.510 [Pipeline] End of Pipeline 00:21:23.551 Finished: SUCCESS
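For reference, the timing_finish step near the end of the log renders the accumulated timing.txt into a flame graph before the VM is halted and destroyed and the artifacts are archived. A minimal way to rerun that rendering by hand, assuming the FlameGraph checkout and timing file sit where the log shows them (the output filename is an assumption of this sketch; the log does not show where autopackage writes the SVG):

# Re-render the build-timing flame graph outside the pipeline (sketch; output name assumed).
flamegraph=/usr/local/FlameGraph/flamegraph.pl
timing=/home/vagrant/spdk_repo/spdk/../output/timing.txt
if [ -x "$flamegraph" ] && [ -r "$timing" ]; then
    # flags are the ones traced in the log; flamegraph.pl writes SVG to stdout
    "$flamegraph" --title 'Build Timing' --nametype Step: --countname seconds "$timing" > timing.svg
fi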